00:00:00.003 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2460 00:00:00.003 originally caused by: 00:00:00.003 Started by upstream project "nightly-trigger" build number 3725 00:00:00.003 originally caused by: 00:00:00.003 Started by timer 00:00:00.003 Started by timer 00:00:00.068 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.068 The recommended git tool is: git 00:00:00.069 using credential 00000000-0000-0000-0000-000000000002 00:00:00.070 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.089 Fetching changes from the remote Git repository 00:00:00.091 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.124 Using shallow fetch with depth 1 00:00:00.124 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.124 > git --version # timeout=10 00:00:00.157 > git --version # 'git version 2.39.2' 00:00:00.157 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.186 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.186 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.113 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.125 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.136 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.136 > git config core.sparsecheckout # timeout=10 00:00:06.148 > git read-tree -mu HEAD # timeout=10 00:00:06.166 > git checkout -f 
db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.188 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.188 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.275 [Pipeline] Start of Pipeline 00:00:06.287 [Pipeline] library 00:00:06.289 Loading library shm_lib@master 00:00:06.289 Library shm_lib@master is cached. Copying from home. 00:00:06.309 [Pipeline] node 00:00:06.335 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.337 [Pipeline] { 00:00:06.344 [Pipeline] catchError 00:00:06.345 [Pipeline] { 00:00:06.355 [Pipeline] wrap 00:00:06.361 [Pipeline] { 00:00:06.367 [Pipeline] stage 00:00:06.368 [Pipeline] { (Prologue) 00:00:06.568 [Pipeline] sh 00:00:07.481 + logger -p user.info -t JENKINS-CI 00:00:07.516 [Pipeline] echo 00:00:07.518 Node: WFP4 00:00:07.526 [Pipeline] sh 00:00:07.862 [Pipeline] setCustomBuildProperty 00:00:07.875 [Pipeline] echo 00:00:07.877 Cleanup processes 00:00:07.882 [Pipeline] sh 00:00:08.173 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.173 6609 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.198 [Pipeline] sh 00:00:08.508 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.508 ++ grep -v 'sudo pgrep' 00:00:08.508 ++ awk '{print $1}' 00:00:08.508 + sudo kill -9 00:00:08.508 + true 00:00:08.523 [Pipeline] cleanWs 00:00:08.532 [WS-CLEANUP] Deleting project workspace... 00:00:08.532 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.544 [WS-CLEANUP] done 00:00:08.548 [Pipeline] setCustomBuildProperty 00:00:08.561 [Pipeline] sh 00:00:08.852 + sudo git config --global --replace-all safe.directory '*' 00:00:08.969 [Pipeline] httpRequest 00:00:10.742 [Pipeline] echo 00:00:10.744 Sorcerer 10.211.164.20 is alive 00:00:10.755 [Pipeline] retry 00:00:10.757 [Pipeline] { 00:00:10.772 [Pipeline] httpRequest 00:00:10.776 HttpMethod: GET 00:00:10.777 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.777 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.801 Response Code: HTTP/1.1 200 OK 00:00:10.801 Success: Status code 200 is in the accepted range: 200,404 00:00:10.801 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:39.916 [Pipeline] } 00:00:39.933 [Pipeline] // retry 00:00:39.941 [Pipeline] sh 00:00:40.231 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:40.249 [Pipeline] httpRequest 00:00:40.637 [Pipeline] echo 00:00:40.639 Sorcerer 10.211.164.20 is alive 00:00:40.649 [Pipeline] retry 00:00:40.651 [Pipeline] { 00:00:40.665 [Pipeline] httpRequest 00:00:40.671 HttpMethod: GET 00:00:40.671 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:40.672 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:40.688 Response Code: HTTP/1.1 200 OK 00:00:40.689 Success: Status code 200 is in the accepted range: 200,404 00:00:40.689 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:04.016 [Pipeline] } 00:01:04.034 [Pipeline] // retry 00:01:04.042 [Pipeline] sh 00:01:04.332 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:06.886 [Pipeline] sh 00:01:07.175 + git -C spdk log 
--oneline -n5 00:01:07.175 e01cb43b8 mk/spdk.common.mk sed the minor version 00:01:07.175 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:01:07.175 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:01:07.175 66289a6db build: use VERSION file for storing version 00:01:07.175 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:07.195 [Pipeline] withCredentials 00:01:07.206 > git --version # timeout=10 00:01:07.219 > git --version # 'git version 2.39.2' 00:01:07.244 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:07.246 [Pipeline] { 00:01:07.255 [Pipeline] retry 00:01:07.256 [Pipeline] { 00:01:07.271 [Pipeline] sh 00:01:07.801 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:08.075 [Pipeline] } 00:01:08.094 [Pipeline] // retry 00:01:08.099 [Pipeline] } 00:01:08.115 [Pipeline] // withCredentials 00:01:08.125 [Pipeline] httpRequest 00:01:08.499 [Pipeline] echo 00:01:08.501 Sorcerer 10.211.164.20 is alive 00:01:08.510 [Pipeline] retry 00:01:08.512 [Pipeline] { 00:01:08.526 [Pipeline] httpRequest 00:01:08.531 HttpMethod: GET 00:01:08.532 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:08.533 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:08.547 Response Code: HTTP/1.1 200 OK 00:01:08.547 Success: Status code 200 is in the accepted range: 200,404 00:01:08.548 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:26.413 [Pipeline] } 00:01:26.431 [Pipeline] // retry 00:01:26.439 [Pipeline] sh 00:01:26.728 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:28.133 [Pipeline] sh 00:01:28.425 + git -C dpdk log --oneline -n5 00:01:28.425 caf0f5d395 version: 22.11.4 00:01:28.425 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:28.425 dc9c799c7d 
vhost: fix missing spinlock unlock 00:01:28.425 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:28.425 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:28.436 [Pipeline] } 00:01:28.450 [Pipeline] // stage 00:01:28.459 [Pipeline] stage 00:01:28.461 [Pipeline] { (Prepare) 00:01:28.480 [Pipeline] writeFile 00:01:28.496 [Pipeline] sh 00:01:28.786 + logger -p user.info -t JENKINS-CI 00:01:28.806 [Pipeline] sh 00:01:29.092 + logger -p user.info -t JENKINS-CI 00:01:29.105 [Pipeline] sh 00:01:29.395 + cat autorun-spdk.conf 00:01:29.395 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.395 SPDK_TEST_NVMF=1 00:01:29.395 SPDK_TEST_NVME_CLI=1 00:01:29.395 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:29.395 SPDK_TEST_NVMF_NICS=e810 00:01:29.395 SPDK_TEST_VFIOUSER=1 00:01:29.395 SPDK_RUN_UBSAN=1 00:01:29.395 NET_TYPE=phy 00:01:29.395 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:29.395 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:29.404 RUN_NIGHTLY=1 00:01:29.409 [Pipeline] readFile 00:01:29.447 [Pipeline] withEnv 00:01:29.449 [Pipeline] { 00:01:29.462 [Pipeline] sh 00:01:29.752 + set -ex 00:01:29.752 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:29.752 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:29.752 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.752 ++ SPDK_TEST_NVMF=1 00:01:29.752 ++ SPDK_TEST_NVME_CLI=1 00:01:29.752 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:29.752 ++ SPDK_TEST_NVMF_NICS=e810 00:01:29.752 ++ SPDK_TEST_VFIOUSER=1 00:01:29.752 ++ SPDK_RUN_UBSAN=1 00:01:29.752 ++ NET_TYPE=phy 00:01:29.752 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:29.752 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:29.752 ++ RUN_NIGHTLY=1 00:01:29.752 + case $SPDK_TEST_NVMF_NICS in 00:01:29.752 + DRIVERS=ice 00:01:29.752 + [[ tcp == \r\d\m\a ]] 00:01:29.752 + [[ -n ice ]] 00:01:29.752 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:29.752 rmmod: ERROR: Module 
mlx4_ib is not currently loaded 00:01:29.752 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:29.752 rmmod: ERROR: Module i40iw is not currently loaded 00:01:29.752 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:29.752 + true 00:01:29.752 + for D in $DRIVERS 00:01:29.752 + sudo modprobe ice 00:01:29.752 + exit 0 00:01:29.762 [Pipeline] } 00:01:29.777 [Pipeline] // withEnv 00:01:29.782 [Pipeline] } 00:01:29.795 [Pipeline] // stage 00:01:29.804 [Pipeline] catchError 00:01:29.806 [Pipeline] { 00:01:29.820 [Pipeline] timeout 00:01:29.820 Timeout set to expire in 1 hr 0 min 00:01:29.822 [Pipeline] { 00:01:29.836 [Pipeline] stage 00:01:29.838 [Pipeline] { (Tests) 00:01:29.852 [Pipeline] sh 00:01:30.142 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.142 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.142 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.142 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:30.142 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:30.142 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:30.142 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:30.142 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:30.142 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:30.142 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:30.142 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:30.142 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.142 + source /etc/os-release 00:01:30.142 ++ NAME='Fedora Linux' 00:01:30.142 ++ VERSION='39 (Cloud Edition)' 00:01:30.142 ++ ID=fedora 00:01:30.142 ++ VERSION_ID=39 00:01:30.142 ++ VERSION_CODENAME= 00:01:30.142 ++ PLATFORM_ID=platform:f39 00:01:30.142 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:30.142 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:30.142 ++ LOGO=fedora-logo-icon 00:01:30.142 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:30.142 ++ HOME_URL=https://fedoraproject.org/ 00:01:30.142 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:30.142 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:30.142 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:30.142 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:30.142 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:30.142 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:30.142 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:30.142 ++ SUPPORT_END=2024-11-12 00:01:30.142 ++ VARIANT='Cloud Edition' 00:01:30.142 ++ VARIANT_ID=cloud 00:01:30.142 + uname -a 00:01:30.142 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:01:30.142 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:32.687 Hugepages 00:01:32.687 node hugesize free / total 00:01:32.687 node0 1048576kB 0 / 0 00:01:32.687 node0 2048kB 0 / 0 00:01:32.687 node1 1048576kB 0 / 0 00:01:32.687 node1 2048kB 0 / 0 00:01:32.687 00:01:32.687 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:32.687 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:32.687 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:01:32.687 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:32.687 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:32.687 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:32.687 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:32.687 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:32.687 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:32.687 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:32.687 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:32.687 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:32.687 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:32.687 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:32.687 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:32.687 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:32.687 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:32.687 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:32.687 + rm -f /tmp/spdk-ld-path 00:01:32.687 + source autorun-spdk.conf 00:01:32.687 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.687 ++ SPDK_TEST_NVMF=1 00:01:32.687 ++ SPDK_TEST_NVME_CLI=1 00:01:32.687 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.687 ++ SPDK_TEST_NVMF_NICS=e810 00:01:32.687 ++ SPDK_TEST_VFIOUSER=1 00:01:32.687 ++ SPDK_RUN_UBSAN=1 00:01:32.687 ++ NET_TYPE=phy 00:01:32.687 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:32.687 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:32.687 ++ RUN_NIGHTLY=1 00:01:32.687 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:32.687 + [[ -n '' ]] 00:01:32.687 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:32.687 + for M in /var/spdk/build-*-manifest.txt 00:01:32.687 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:32.687 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:32.687 + for M in /var/spdk/build-*-manifest.txt 00:01:32.687 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:32.687 + cp /var/spdk/build-pkg-manifest.txt 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:32.687 + for M in /var/spdk/build-*-manifest.txt 00:01:32.687 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:32.687 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:32.687 ++ uname 00:01:32.687 + [[ Linux == \L\i\n\u\x ]] 00:01:32.687 + sudo dmesg -T 00:01:32.687 + sudo dmesg --clear 00:01:32.687 + dmesg_pid=7548 00:01:32.687 + [[ Fedora Linux == FreeBSD ]] 00:01:32.687 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:32.687 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:32.687 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:32.687 + sudo dmesg -Tw 00:01:32.687 + [[ -x /usr/src/fio-static/fio ]] 00:01:32.687 + export FIO_BIN=/usr/src/fio-static/fio 00:01:32.687 + FIO_BIN=/usr/src/fio-static/fio 00:01:32.687 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:32.687 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:32.687 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:32.687 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:32.687 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:32.687 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:32.687 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:32.687 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:32.687 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:32.687 22:10:53 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:32.687 22:10:53 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:32.687 22:10:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.687 22:10:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:32.687 22:10:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- 
$ SPDK_TEST_NVME_CLI=1 00:01:32.687 22:10:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.687 22:10:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:32.687 22:10:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:32.687 22:10:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:32.687 22:10:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:32.687 22:10:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:32.687 22:10:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:32.687 22:10:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:01:32.687 22:10:53 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:32.687 22:10:53 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:32.949 22:10:53 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:32.949 22:10:53 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:32.949 22:10:53 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:32.949 22:10:53 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:32.949 22:10:53 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:32.949 22:10:53 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:32.949 22:10:53 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.949 22:10:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.949 22:10:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.949 22:10:53 -- paths/export.sh@5 -- $ export PATH 00:01:32.949 22:10:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.949 22:10:53 -- 
common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:32.949 22:10:53 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:32.949 22:10:53 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734210653.XXXXXX 00:01:32.949 22:10:53 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734210653.N8C7wV 00:01:32.949 22:10:53 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:32.949 22:10:53 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']' 00:01:32.949 22:10:53 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:32.949 22:10:53 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:32.949 22:10:53 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:32.949 22:10:53 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:32.949 22:10:53 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:32.949 22:10:53 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:32.949 22:10:53 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.949 22:10:53 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:32.949 22:10:53 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:32.949 22:10:53 -- pm/common@17 -- $ local monitor 00:01:32.949 22:10:53 -- 
pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.949 22:10:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.949 22:10:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.949 22:10:53 -- pm/common@21 -- $ date +%s 00:01:32.949 22:10:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.949 22:10:53 -- pm/common@21 -- $ date +%s 00:01:32.949 22:10:53 -- pm/common@25 -- $ sleep 1 00:01:32.949 22:10:53 -- pm/common@21 -- $ date +%s 00:01:32.949 22:10:53 -- pm/common@21 -- $ date +%s 00:01:32.949 22:10:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734210653 00:01:32.949 22:10:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734210653 00:01:32.949 22:10:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734210653 00:01:32.949 22:10:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734210653 00:01:32.949 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734210653_collect-cpu-temp.pm.log 00:01:32.949 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734210653_collect-cpu-load.pm.log 00:01:32.949 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734210653_collect-vmstat.pm.log 00:01:32.949 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734210653_collect-bmc-pm.bmc.pm.log 00:01:33.899 22:10:54 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:33.899 22:10:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:33.899 22:10:54 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:33.899 22:10:54 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:33.899 22:10:54 -- spdk/autobuild.sh@16 -- $ date -u 00:01:33.899 Sat Dec 14 09:10:54 PM UTC 2024 00:01:33.899 22:10:54 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:33.899 v25.01-rc1-2-ge01cb43b8 00:01:33.899 22:10:54 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:33.899 22:10:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:33.899 22:10:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:33.899 22:10:54 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:33.899 22:10:54 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:33.899 22:10:54 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.899 ************************************ 00:01:33.899 START TEST ubsan 00:01:33.899 ************************************ 00:01:33.899 22:10:54 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:33.899 using ubsan 00:01:33.899 00:01:33.899 real 0m0.000s 00:01:33.899 user 0m0.000s 00:01:33.899 sys 0m0.000s 00:01:33.899 22:10:54 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:33.899 22:10:54 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:33.899 ************************************ 00:01:33.899 END TEST ubsan 00:01:33.899 ************************************ 00:01:34.160 22:10:54 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:34.160 22:10:54 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:34.160 22:10:54 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:34.160 22:10:54 -- 
common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:01:34.160 22:10:54 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:34.160 22:10:54 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.160 ************************************ 00:01:34.160 START TEST build_native_dpdk 00:01:34.160 ************************************ 00:01:34.160 22:10:54 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:01:34.160 22:10:54 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:34.160 22:10:54 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:34.160 22:10:54 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:34.160 22:10:54 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:34.160 22:10:54 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:34.160 22:10:54 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:34.160 22:10:54 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:34.160 22:10:54 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:34.160 22:10:54 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:34.160 22:10:54 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:34.160 22:10:54 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:34.160 22:10:54 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:34.160 22:10:54 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:34.161 22:10:54 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:34.161 caf0f5d395 version: 22.11.4 00:01:34.161 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:34.161 dc9c799c7d vhost: fix missing spinlock unlock 00:01:34.161 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:34.161 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" 
"bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:34.161 22:10:54 
build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:01:34.161 patching file config/rte_config.h 00:01:34.161 Hunk #1 succeeded at 60 (offset 1 line). 
00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:01:34.161 patching file lib/pcapng/rte_pcapng.c 00:01:34.161 Hunk #1 succeeded at 110 (offset -18 lines). 
00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:34.161 22:10:54 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:01:34.161 22:10:54 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native 
-Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:01:40.758 The Meson build system 00:01:40.758 Version: 1.5.0 00:01:40.758 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:40.758 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:40.758 Build type: native build 00:01:40.758 Program cat found: YES (/usr/bin/cat) 00:01:40.758 Project name: DPDK 00:01:40.758 Project version: 22.11.4 00:01:40.758 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:40.758 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:40.758 Host machine cpu family: x86_64 00:01:40.758 Host machine cpu: x86_64 00:01:40.758 Message: ## Building in Developer Mode ## 00:01:40.758 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:40.758 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:40.758 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:40.758 Program objdump found: YES (/usr/bin/objdump) 00:01:40.758 Program python3 found: YES (/usr/bin/python3) 00:01:40.758 Program cat found: YES (/usr/bin/cat) 00:01:40.758 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
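The `cmp_versions` traces above (checking 22.11.4 against 21.11.0 and 24.07.0 to decide which patches apply) boil down to splitting each version string on `.`, `-` and `:` and comparing fields numerically, left to right. The function below is a simplified stand-in for SPDK's `scripts/common.sh` implementation, not a verbatim copy; the `10#` base prefix mirrors what the traced `decimal` helper accomplishes for zero-padded fields like `07`.

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced in the log: split both versions
# into fields, compare field by field, and resolve the operator at the
# first difference. Simplified stand-in for scripts/common.sh cmp_versions.
cmp_versions() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    local op=$2
    IFS=.-: read -ra ver2 <<< "$3"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        # Force base-10 so fields like "07" are not parsed as octal.
        local a=$((10#${ver1[v]:-0})) b=$((10#${ver2[v]:-0}))
        if (( a > b )); then
            [[ $op == '>' || $op == '>=' ]]; return
        elif (( a < b )); then
            [[ $op == '<' || $op == '<=' ]]; return
        fi
    done
    # All fields equal: only the inclusive operators succeed.
    [[ $op == '<=' || $op == '>=' || $op == '==' ]]
}

cmp_versions 22.11.4 '<' 21.11.0 && echo yes || echo no   # no: major 22 > 21
cmp_versions 22.11.4 '<' 24.07.0 && echo yes || echo no   # yes: major 22 < 24
```

This matches the two outcomes in the trace: the `lt 22.11.4 21.11.0` check returns false (so the rte_config.h patch path for newer DPDK is taken), while `lt 22.11.4 24.07.0` returns true (so the rte_pcapng.c patch is applied).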
00:01:40.758 Checking for size of "void *" : 8 00:01:40.758 Checking for size of "void *" : 8 (cached) 00:01:40.758 Library m found: YES 00:01:40.758 Library numa found: YES 00:01:40.758 Has header "numaif.h" : YES 00:01:40.759 Library fdt found: NO 00:01:40.759 Library execinfo found: NO 00:01:40.759 Has header "execinfo.h" : YES 00:01:40.759 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:40.759 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:40.759 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:40.759 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:40.759 Run-time dependency openssl found: YES 3.1.1 00:01:40.759 Run-time dependency libpcap found: YES 1.10.4 00:01:40.759 Has header "pcap.h" with dependency libpcap: YES 00:01:40.759 Compiler for C supports arguments -Wcast-qual: YES 00:01:40.759 Compiler for C supports arguments -Wdeprecated: YES 00:01:40.759 Compiler for C supports arguments -Wformat: YES 00:01:40.759 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:40.759 Compiler for C supports arguments -Wformat-security: NO 00:01:40.759 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:40.759 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:40.759 Compiler for C supports arguments -Wnested-externs: YES 00:01:40.759 Compiler for C supports arguments -Wold-style-definition: YES 00:01:40.759 Compiler for C supports arguments -Wpointer-arith: YES 00:01:40.759 Compiler for C supports arguments -Wsign-compare: YES 00:01:40.759 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:40.759 Compiler for C supports arguments -Wundef: YES 00:01:40.759 Compiler for C supports arguments -Wwrite-strings: YES 00:01:40.759 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:40.759 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:40.759 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:40.759 
Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:40.759 Compiler for C supports arguments -mavx512f: YES 00:01:40.759 Checking if "AVX512 checking" compiles: YES 00:01:40.759 Fetching value of define "__SSE4_2__" : 1 00:01:40.759 Fetching value of define "__AES__" : 1 00:01:40.759 Fetching value of define "__AVX__" : 1 00:01:40.759 Fetching value of define "__AVX2__" : 1 00:01:40.759 Fetching value of define "__AVX512BW__" : 1 00:01:40.759 Fetching value of define "__AVX512CD__" : 1 00:01:40.759 Fetching value of define "__AVX512DQ__" : 1 00:01:40.759 Fetching value of define "__AVX512F__" : 1 00:01:40.759 Fetching value of define "__AVX512VL__" : 1 00:01:40.759 Fetching value of define "__PCLMUL__" : 1 00:01:40.759 Fetching value of define "__RDRND__" : 1 00:01:40.759 Fetching value of define "__RDSEED__" : 1 00:01:40.759 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:40.759 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:40.759 Message: lib/kvargs: Defining dependency "kvargs" 00:01:40.759 Message: lib/telemetry: Defining dependency "telemetry" 00:01:40.759 Checking for function "getentropy" : YES 00:01:40.759 Message: lib/eal: Defining dependency "eal" 00:01:40.759 Message: lib/ring: Defining dependency "ring" 00:01:40.759 Message: lib/rcu: Defining dependency "rcu" 00:01:40.759 Message: lib/mempool: Defining dependency "mempool" 00:01:40.759 Message: lib/mbuf: Defining dependency "mbuf" 00:01:40.759 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:40.759 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:40.759 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:40.759 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:40.759 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:40.759 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:40.759 Compiler for C supports arguments -mpclmul: YES 00:01:40.759 Compiler for C supports arguments -maes: YES 
00:01:40.759 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:40.759 Compiler for C supports arguments -mavx512bw: YES 00:01:40.759 Compiler for C supports arguments -mavx512dq: YES 00:01:40.759 Compiler for C supports arguments -mavx512vl: YES 00:01:40.759 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:40.759 Compiler for C supports arguments -mavx2: YES 00:01:40.759 Compiler for C supports arguments -mavx: YES 00:01:40.759 Message: lib/net: Defining dependency "net" 00:01:40.759 Message: lib/meter: Defining dependency "meter" 00:01:40.759 Message: lib/ethdev: Defining dependency "ethdev" 00:01:40.759 Message: lib/pci: Defining dependency "pci" 00:01:40.759 Message: lib/cmdline: Defining dependency "cmdline" 00:01:40.759 Message: lib/metrics: Defining dependency "metrics" 00:01:40.759 Message: lib/hash: Defining dependency "hash" 00:01:40.759 Message: lib/timer: Defining dependency "timer" 00:01:40.759 Fetching value of define "__AVX2__" : 1 (cached) 00:01:40.759 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:40.759 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:40.759 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:40.759 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:40.759 Message: lib/acl: Defining dependency "acl" 00:01:40.759 Message: lib/bbdev: Defining dependency "bbdev" 00:01:40.759 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:40.759 Run-time dependency libelf found: YES 0.191 00:01:40.759 Message: lib/bpf: Defining dependency "bpf" 00:01:40.759 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:40.759 Message: lib/compressdev: Defining dependency "compressdev" 00:01:40.759 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:40.759 Message: lib/distributor: Defining dependency "distributor" 00:01:40.759 Message: lib/efd: Defining dependency "efd" 00:01:40.759 Message: lib/eventdev: Defining dependency "eventdev" 00:01:40.759 Message: lib/gpudev: 
Defining dependency "gpudev" 00:01:40.759 Message: lib/gro: Defining dependency "gro" 00:01:40.759 Message: lib/gso: Defining dependency "gso" 00:01:40.759 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:40.759 Message: lib/jobstats: Defining dependency "jobstats" 00:01:40.759 Message: lib/latencystats: Defining dependency "latencystats" 00:01:40.759 Message: lib/lpm: Defining dependency "lpm" 00:01:40.759 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:40.759 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:40.759 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:40.759 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:40.759 Message: lib/member: Defining dependency "member" 00:01:40.759 Message: lib/pcapng: Defining dependency "pcapng" 00:01:40.759 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:40.759 Message: lib/power: Defining dependency "power" 00:01:40.759 Message: lib/rawdev: Defining dependency "rawdev" 00:01:40.759 Message: lib/regexdev: Defining dependency "regexdev" 00:01:40.759 Message: lib/dmadev: Defining dependency "dmadev" 00:01:40.759 Message: lib/rib: Defining dependency "rib" 00:01:40.759 Message: lib/reorder: Defining dependency "reorder" 00:01:40.759 Message: lib/sched: Defining dependency "sched" 00:01:40.759 Message: lib/security: Defining dependency "security" 00:01:40.759 Message: lib/stack: Defining dependency "stack" 00:01:40.759 Has header "linux/userfaultfd.h" : YES 00:01:40.759 Message: lib/vhost: Defining dependency "vhost" 00:01:40.759 Message: lib/ipsec: Defining dependency "ipsec" 00:01:40.759 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:40.759 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:40.759 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:40.759 Message: lib/fib: Defining dependency "fib" 00:01:40.759 Message: lib/port: Defining dependency "port" 00:01:40.759 Message: lib/pdump: Defining dependency "pdump" 
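Each "Compiler for C supports arguments" line above is Meson probing the host compiler: it compiles a trivial translation unit with the candidate flag (plus `-Werror`, so unrecognized-option warnings become failures) and records YES or NO from the exit status. A rough by-hand equivalent, assuming `gcc` is on the PATH as it is on this builder:

```shell
# Approximate one of Meson's "Compiler for C supports arguments" probes:
# compile an empty program with the flag and -Werror, and let the
# compiler's exit status decide YES/NO. Illustrative, not Meson's code.
supports_cflag() {
    echo 'int main(void){return 0;}' |
        gcc -Werror "$1" -x c -o /dev/null - 2>/dev/null
}

supports_cflag -Wall && echo "-Wall: YES" || echo "-Wall: NO"
supports_cflag -mavx512f && echo "-mavx512f: YES" || echo "-mavx512f: NO"
```

The cached results ("YES (cached)") show Meson memoizing these probes, which is why flags such as `-mavx512f` are only test-compiled once even though several libraries ask for them.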
00:01:40.759 Message: lib/table: Defining dependency "table" 00:01:40.759 Message: lib/pipeline: Defining dependency "pipeline" 00:01:40.759 Message: lib/graph: Defining dependency "graph" 00:01:40.759 Message: lib/node: Defining dependency "node" 00:01:40.759 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:40.759 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:40.759 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:40.759 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:40.759 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:40.759 Compiler for C supports arguments -Wno-unused-value: YES 00:01:40.759 Compiler for C supports arguments -Wno-format: YES 00:01:40.759 Compiler for C supports arguments -Wno-format-security: YES 00:01:40.759 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:41.329 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:41.330 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:41.330 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:41.330 Fetching value of define "__AVX2__" : 1 (cached) 00:01:41.330 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:41.330 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:41.330 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:41.330 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:41.330 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:41.330 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:41.330 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:41.330 Configuring doxy-api.conf using configuration 00:01:41.330 Program sphinx-build found: NO 00:01:41.330 Configuring rte_build_config.h using configuration 00:01:41.330 Message: 00:01:41.330 ================= 00:01:41.330 Applications Enabled 00:01:41.330 ================= 00:01:41.330 00:01:41.330 apps: 
00:01:41.330 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:41.330 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:41.330 test-security-perf, 00:01:41.330 00:01:41.330 Message: 00:01:41.330 ================= 00:01:41.330 Libraries Enabled 00:01:41.330 ================= 00:01:41.330 00:01:41.330 libs: 00:01:41.330 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:41.330 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:41.330 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:41.330 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:41.330 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:41.330 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:41.330 table, pipeline, graph, node, 00:01:41.330 00:01:41.330 Message: 00:01:41.330 =============== 00:01:41.330 Drivers Enabled 00:01:41.330 =============== 00:01:41.330 00:01:41.330 common: 00:01:41.330 00:01:41.330 bus: 00:01:41.330 pci, vdev, 00:01:41.330 mempool: 00:01:41.330 ring, 00:01:41.330 dma: 00:01:41.330 00:01:41.330 net: 00:01:41.330 i40e, 00:01:41.330 raw: 00:01:41.330 00:01:41.330 crypto: 00:01:41.330 00:01:41.330 compress: 00:01:41.330 00:01:41.330 regex: 00:01:41.330 00:01:41.330 vdpa: 00:01:41.330 00:01:41.330 event: 00:01:41.330 00:01:41.330 baseband: 00:01:41.330 00:01:41.330 gpu: 00:01:41.330 00:01:41.330 00:01:41.330 Message: 00:01:41.330 ================= 00:01:41.330 Content Skipped 00:01:41.330 ================= 00:01:41.330 00:01:41.330 apps: 00:01:41.330 00:01:41.330 libs: 00:01:41.330 kni: explicitly disabled via build config (deprecated lib) 00:01:41.330 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:41.330 00:01:41.330 drivers: 00:01:41.330 common/cpt: not in enabled drivers build config 00:01:41.330 common/dpaax: not in enabled drivers build 
config 00:01:41.330 common/iavf: not in enabled drivers build config 00:01:41.330 common/idpf: not in enabled drivers build config 00:01:41.330 common/mvep: not in enabled drivers build config 00:01:41.330 common/octeontx: not in enabled drivers build config 00:01:41.330 bus/auxiliary: not in enabled drivers build config 00:01:41.330 bus/dpaa: not in enabled drivers build config 00:01:41.330 bus/fslmc: not in enabled drivers build config 00:01:41.330 bus/ifpga: not in enabled drivers build config 00:01:41.330 bus/vmbus: not in enabled drivers build config 00:01:41.330 common/cnxk: not in enabled drivers build config 00:01:41.330 common/mlx5: not in enabled drivers build config 00:01:41.330 common/qat: not in enabled drivers build config 00:01:41.330 common/sfc_efx: not in enabled drivers build config 00:01:41.330 mempool/bucket: not in enabled drivers build config 00:01:41.330 mempool/cnxk: not in enabled drivers build config 00:01:41.330 mempool/dpaa: not in enabled drivers build config 00:01:41.330 mempool/dpaa2: not in enabled drivers build config 00:01:41.330 mempool/octeontx: not in enabled drivers build config 00:01:41.330 mempool/stack: not in enabled drivers build config 00:01:41.330 dma/cnxk: not in enabled drivers build config 00:01:41.330 dma/dpaa: not in enabled drivers build config 00:01:41.330 dma/dpaa2: not in enabled drivers build config 00:01:41.330 dma/hisilicon: not in enabled drivers build config 00:01:41.330 dma/idxd: not in enabled drivers build config 00:01:41.330 dma/ioat: not in enabled drivers build config 00:01:41.330 dma/skeleton: not in enabled drivers build config 00:01:41.330 net/af_packet: not in enabled drivers build config 00:01:41.330 net/af_xdp: not in enabled drivers build config 00:01:41.330 net/ark: not in enabled drivers build config 00:01:41.330 net/atlantic: not in enabled drivers build config 00:01:41.330 net/avp: not in enabled drivers build config 00:01:41.330 net/axgbe: not in enabled drivers build config 00:01:41.330 
net/bnx2x: not in enabled drivers build config 00:01:41.330 net/bnxt: not in enabled drivers build config 00:01:41.330 net/bonding: not in enabled drivers build config 00:01:41.330 net/cnxk: not in enabled drivers build config 00:01:41.330 net/cxgbe: not in enabled drivers build config 00:01:41.330 net/dpaa: not in enabled drivers build config 00:01:41.330 net/dpaa2: not in enabled drivers build config 00:01:41.330 net/e1000: not in enabled drivers build config 00:01:41.330 net/ena: not in enabled drivers build config 00:01:41.330 net/enetc: not in enabled drivers build config 00:01:41.330 net/enetfec: not in enabled drivers build config 00:01:41.330 net/enic: not in enabled drivers build config 00:01:41.330 net/failsafe: not in enabled drivers build config 00:01:41.330 net/fm10k: not in enabled drivers build config 00:01:41.330 net/gve: not in enabled drivers build config 00:01:41.330 net/hinic: not in enabled drivers build config 00:01:41.330 net/hns3: not in enabled drivers build config 00:01:41.330 net/iavf: not in enabled drivers build config 00:01:41.330 net/ice: not in enabled drivers build config 00:01:41.330 net/idpf: not in enabled drivers build config 00:01:41.330 net/igc: not in enabled drivers build config 00:01:41.330 net/ionic: not in enabled drivers build config 00:01:41.330 net/ipn3ke: not in enabled drivers build config 00:01:41.330 net/ixgbe: not in enabled drivers build config 00:01:41.330 net/kni: not in enabled drivers build config 00:01:41.330 net/liquidio: not in enabled drivers build config 00:01:41.330 net/mana: not in enabled drivers build config 00:01:41.330 net/memif: not in enabled drivers build config 00:01:41.330 net/mlx4: not in enabled drivers build config 00:01:41.330 net/mlx5: not in enabled drivers build config 00:01:41.330 net/mvneta: not in enabled drivers build config 00:01:41.330 net/mvpp2: not in enabled drivers build config 00:01:41.330 net/netvsc: not in enabled drivers build config 00:01:41.330 net/nfb: not in enabled 
drivers build config 00:01:41.330 net/nfp: not in enabled drivers build config 00:01:41.330 net/ngbe: not in enabled drivers build config 00:01:41.330 net/null: not in enabled drivers build config 00:01:41.330 net/octeontx: not in enabled drivers build config 00:01:41.330 net/octeon_ep: not in enabled drivers build config 00:01:41.330 net/pcap: not in enabled drivers build config 00:01:41.330 net/pfe: not in enabled drivers build config 00:01:41.330 net/qede: not in enabled drivers build config 00:01:41.330 net/ring: not in enabled drivers build config 00:01:41.330 net/sfc: not in enabled drivers build config 00:01:41.330 net/softnic: not in enabled drivers build config 00:01:41.330 net/tap: not in enabled drivers build config 00:01:41.330 net/thunderx: not in enabled drivers build config 00:01:41.330 net/txgbe: not in enabled drivers build config 00:01:41.330 net/vdev_netvsc: not in enabled drivers build config 00:01:41.330 net/vhost: not in enabled drivers build config 00:01:41.330 net/virtio: not in enabled drivers build config 00:01:41.330 net/vmxnet3: not in enabled drivers build config 00:01:41.330 raw/cnxk_bphy: not in enabled drivers build config 00:01:41.330 raw/cnxk_gpio: not in enabled drivers build config 00:01:41.330 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:41.330 raw/ifpga: not in enabled drivers build config 00:01:41.330 raw/ntb: not in enabled drivers build config 00:01:41.330 raw/skeleton: not in enabled drivers build config 00:01:41.330 crypto/armv8: not in enabled drivers build config 00:01:41.330 crypto/bcmfs: not in enabled drivers build config 00:01:41.330 crypto/caam_jr: not in enabled drivers build config 00:01:41.330 crypto/ccp: not in enabled drivers build config 00:01:41.330 crypto/cnxk: not in enabled drivers build config 00:01:41.330 crypto/dpaa_sec: not in enabled drivers build config 00:01:41.330 crypto/dpaa2_sec: not in enabled drivers build config 00:01:41.330 crypto/ipsec_mb: not in enabled drivers build config 
00:01:41.330 crypto/mlx5: not in enabled drivers build config 00:01:41.330 crypto/mvsam: not in enabled drivers build config 00:01:41.330 crypto/nitrox: not in enabled drivers build config 00:01:41.330 crypto/null: not in enabled drivers build config 00:01:41.330 crypto/octeontx: not in enabled drivers build config 00:01:41.330 crypto/openssl: not in enabled drivers build config 00:01:41.330 crypto/scheduler: not in enabled drivers build config 00:01:41.330 crypto/uadk: not in enabled drivers build config 00:01:41.330 crypto/virtio: not in enabled drivers build config 00:01:41.330 compress/isal: not in enabled drivers build config 00:01:41.330 compress/mlx5: not in enabled drivers build config 00:01:41.330 compress/octeontx: not in enabled drivers build config 00:01:41.330 compress/zlib: not in enabled drivers build config 00:01:41.330 regex/mlx5: not in enabled drivers build config 00:01:41.330 regex/cn9k: not in enabled drivers build config 00:01:41.330 vdpa/ifc: not in enabled drivers build config 00:01:41.330 vdpa/mlx5: not in enabled drivers build config 00:01:41.330 vdpa/sfc: not in enabled drivers build config 00:01:41.330 event/cnxk: not in enabled drivers build config 00:01:41.330 event/dlb2: not in enabled drivers build config 00:01:41.330 event/dpaa: not in enabled drivers build config 00:01:41.330 event/dpaa2: not in enabled drivers build config 00:01:41.330 event/dsw: not in enabled drivers build config 00:01:41.330 event/opdl: not in enabled drivers build config 00:01:41.330 event/skeleton: not in enabled drivers build config 00:01:41.330 event/sw: not in enabled drivers build config 00:01:41.330 event/octeontx: not in enabled drivers build config 00:01:41.330 baseband/acc: not in enabled drivers build config 00:01:41.330 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:41.331 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:41.331 baseband/la12xx: not in enabled drivers build config 00:01:41.331 baseband/null: not in 
enabled drivers build config 00:01:41.331 baseband/turbo_sw: not in enabled drivers build config 00:01:41.331 gpu/cuda: not in enabled drivers build config 00:01:41.331 00:01:41.331 00:01:41.331 Build targets in project: 311 00:01:41.331 00:01:41.331 DPDK 22.11.4 00:01:41.331 00:01:41.331 User defined options 00:01:41.331 libdir : lib 00:01:41.331 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:41.331 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:41.331 c_link_args : 00:01:41.331 enable_docs : false 00:01:41.331 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:01:41.331 enable_kmods : false 00:01:41.331 machine : native 00:01:41.331 tests : false 00:01:41.331 00:01:41.331 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:41.331 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
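The two deprecation warnings in this configure step ("machine" superseded by "cpu_instruction_set", and bare `meson [options]` superseded by `meson setup [options]`) are both cosmetic here, but the modern invocation would look roughly like the following. Paths are abbreviated and the driver list is truncated for illustration; this is a sketch of the updated command shape, not the autobuild script's actual line.

```shell
# Hypothetical modernized form of the configure step above: explicit
# "setup" subcommand, and cpu_instruction_set instead of the deprecated
# "machine" option. Driver list shortened for readability.
meson setup build-tmp \
    --prefix="$PWD/dpdk/build" --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dcpu_instruction_set=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e
ninja -C build-tmp
```

Note that the trailing comma in the log's `enable_drivers` value is harmless: it comes from the `printf %s,` loop that joins the driver list, and Meson ignores the resulting empty element.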
00:01:41.331 22:11:02 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 00:01:41.331 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:41.331 [1/740] Generating lib/rte_kvargs_def with a custom command 00:01:41.331 [2/740] Generating lib/rte_kvargs_mingw with a custom command 00:01:41.331 [3/740] Generating lib/rte_telemetry_mingw with a custom command 00:01:41.598 [4/740] Generating lib/rte_telemetry_def with a custom command 00:01:41.598 [5/740] Generating lib/rte_mempool_def with a custom command 00:01:41.598 [6/740] Generating lib/rte_eal_def with a custom command 00:01:41.598 [7/740] Generating lib/rte_rcu_mingw with a custom command 00:01:41.598 [8/740] Generating lib/rte_ring_mingw with a custom command 00:01:41.598 [9/740] Generating lib/rte_ring_def with a custom command 00:01:41.598 [10/740] Generating lib/rte_mbuf_mingw with a custom command 00:01:41.598 [11/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:41.598 [12/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:41.598 [13/740] Generating lib/rte_eal_mingw with a custom command 00:01:41.598 [14/740] Generating lib/rte_rcu_def with a custom command 00:01:41.598 [15/740] Generating lib/rte_mempool_mingw with a custom command 00:01:41.598 [16/740] Generating lib/rte_mbuf_def with a custom command 00:01:41.598 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:41.598 [18/740] Generating lib/rte_net_def with a custom command 00:01:41.598 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:41.598 [20/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:41.598 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:41.598 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 
00:01:41.598 [23/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:41.598 [24/740] Generating lib/rte_meter_mingw with a custom command 00:01:41.598 [25/740] Generating lib/rte_net_mingw with a custom command 00:01:41.598 [26/740] Generating lib/rte_meter_def with a custom command 00:01:41.598 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:41.598 [28/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:41.598 [29/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:41.598 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:41.598 [31/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:41.598 [32/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:41.598 [33/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:41.598 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:41.598 [35/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:41.598 [36/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:41.598 [37/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:41.598 [38/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:41.598 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:41.598 [40/740] Linking static target lib/librte_kvargs.a 00:01:41.598 [41/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:41.598 [42/740] Generating lib/rte_ethdev_def with a custom command 00:01:41.598 [43/740] Generating lib/rte_ethdev_mingw with a custom command 00:01:41.598 [44/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:41.598 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:41.598 [46/740] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:41.598 [47/740] Generating lib/rte_pci_def with a custom command 00:01:41.598 [48/740] Generating lib/rte_pci_mingw with a custom command 00:01:41.598 [49/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:41.598 [50/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:41.598 [51/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:41.598 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:41.598 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:41.598 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:41.598 [55/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:41.598 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:41.598 [57/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:41.598 [58/740] Generating lib/rte_cmdline_def with a custom command 00:01:41.598 [59/740] Generating lib/rte_cmdline_mingw with a custom command 00:01:41.598 [60/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:41.598 [61/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:41.864 [62/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:41.864 [63/740] Generating lib/rte_metrics_def with a custom command 00:01:41.864 [64/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:41.864 [65/740] Linking static target lib/librte_ring.a 00:01:41.864 [66/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:41.864 [67/740] Generating lib/rte_metrics_mingw with a custom command 00:01:41.864 [68/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:41.864 [69/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:41.864 [70/740] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:41.864 [71/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:41.864 [72/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:41.864 [73/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:41.864 [74/740] Generating lib/rte_timer_def with a custom command 00:01:41.864 [75/740] Linking static target lib/librte_pci.a 00:01:41.864 [76/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:41.864 [77/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:41.864 [78/740] Generating lib/rte_hash_def with a custom command 00:01:41.864 [79/740] Generating lib/rte_hash_mingw with a custom command 00:01:41.864 [80/740] Generating lib/rte_timer_mingw with a custom command 00:01:41.864 [81/740] Linking static target lib/librte_meter.a 00:01:41.864 [82/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:41.864 [83/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:41.864 [84/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:41.864 [85/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:41.864 [86/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:41.864 [87/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:41.864 [88/740] Generating lib/rte_acl_def with a custom command 00:01:41.864 [89/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:41.864 [90/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:41.864 [91/740] Generating lib/rte_acl_mingw with a custom command 00:01:41.864 [92/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:41.864 [93/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 
00:01:41.864 [94/740] Generating lib/rte_bbdev_mingw with a custom command 00:01:41.864 [95/740] Generating lib/rte_bitratestats_def with a custom command 00:01:41.864 [96/740] Generating lib/rte_bitratestats_mingw with a custom command 00:01:41.864 [97/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:41.864 [98/740] Generating lib/rte_bbdev_def with a custom command 00:01:41.864 [99/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:41.864 [100/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:41.864 [101/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:41.864 [102/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:41.864 [103/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:41.864 [104/740] Generating lib/rte_bpf_def with a custom command 00:01:41.864 [105/740] Generating lib/rte_bpf_mingw with a custom command 00:01:41.864 [106/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:41.864 [107/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:41.864 [108/740] Generating lib/rte_cfgfile_def with a custom command 00:01:41.864 [109/740] Generating lib/rte_cfgfile_mingw with a custom command 00:01:41.864 [110/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:41.864 [111/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:41.864 [112/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:41.864 [113/740] Generating lib/rte_compressdev_def with a custom command 00:01:41.864 [114/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:41.864 [115/740] Generating lib/rte_compressdev_mingw with a custom command 00:01:41.864 [116/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:41.864 [117/740] Generating 
lib/rte_cryptodev_def with a custom command 00:01:41.864 [118/740] Generating lib/rte_cryptodev_mingw with a custom command 00:01:41.864 [119/740] Generating lib/rte_distributor_def with a custom command 00:01:41.864 [120/740] Generating lib/rte_distributor_mingw with a custom command 00:01:41.864 [121/740] Generating lib/rte_efd_def with a custom command 00:01:41.864 [122/740] Generating lib/rte_efd_mingw with a custom command 00:01:41.864 [123/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:42.132 [124/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:42.132 [125/740] Generating lib/rte_eventdev_def with a custom command 00:01:42.132 [126/740] Generating lib/rte_eventdev_mingw with a custom command 00:01:42.132 [127/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:42.132 [128/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.132 [129/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.132 [130/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:42.132 [131/740] Generating lib/rte_gpudev_def with a custom command 00:01:42.132 [132/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.132 [133/740] Generating lib/rte_gpudev_mingw with a custom command 00:01:42.132 [134/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:42.132 [135/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:42.132 [136/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:42.132 [137/740] Linking target lib/librte_kvargs.so.23.0 00:01:42.132 [138/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:42.132 [139/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.132 [140/740] 
Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:42.132 [141/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:42.132 [142/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:42.132 [143/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:42.132 [144/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:42.132 [145/740] Generating lib/rte_gro_def with a custom command 00:01:42.132 [146/740] Generating lib/rte_gro_mingw with a custom command 00:01:42.132 [147/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:42.132 [148/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:42.132 [149/740] Generating lib/rte_gso_mingw with a custom command 00:01:42.395 [150/740] Generating lib/rte_gso_def with a custom command 00:01:42.395 [151/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:42.395 [152/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:42.395 [153/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:42.395 [154/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:42.395 [155/740] Linking static target lib/librte_cfgfile.a 00:01:42.395 [156/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:42.396 [157/740] Generating lib/rte_ip_frag_def with a custom command 00:01:42.396 [158/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:42.396 [159/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:42.396 [160/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:42.396 [161/740] Generating lib/rte_ip_frag_mingw with a custom command 00:01:42.396 [162/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:42.396 [163/740] Linking static target lib/net/libnet_crc_avx512_lib.a 
00:01:42.396 [164/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:42.396 [165/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:42.396 [166/740] Generating lib/rte_jobstats_def with a custom command 00:01:42.396 [167/740] Generating lib/rte_latencystats_def with a custom command 00:01:42.396 [168/740] Generating lib/rte_lpm_def with a custom command 00:01:42.396 [169/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:42.396 [170/740] Generating lib/rte_latencystats_mingw with a custom command 00:01:42.396 [171/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:42.396 [172/740] Generating lib/rte_jobstats_mingw with a custom command 00:01:42.396 [173/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:42.396 [174/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:42.396 [175/740] Generating lib/rte_lpm_mingw with a custom command 00:01:42.396 [176/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:42.396 [177/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:42.396 [178/740] Generating lib/rte_member_mingw with a custom command 00:01:42.396 [179/740] Generating lib/rte_member_def with a custom command 00:01:42.396 [180/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:42.396 [181/740] Linking static target lib/librte_cmdline.a 00:01:42.396 [182/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:42.396 [183/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:42.396 [184/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:42.396 [185/740] Linking static target lib/librte_timer.a 00:01:42.396 [186/740] Linking static target lib/librte_metrics.a 00:01:42.396 [187/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:42.396 [188/740] Generating 
lib/rte_pcapng_mingw with a custom command 00:01:42.396 [189/740] Generating lib/rte_pcapng_def with a custom command 00:01:42.667 [190/740] Linking static target lib/librte_bitratestats.a 00:01:42.667 [191/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:42.667 [192/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:42.667 [193/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:42.667 [194/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:42.667 [195/740] Linking static target lib/librte_telemetry.a 00:01:42.667 [196/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:42.667 [197/740] Linking static target lib/librte_jobstats.a 00:01:42.667 [198/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:42.667 [199/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:42.667 [200/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:42.667 [201/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:42.667 [202/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:42.667 [203/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:42.667 [204/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:42.667 [205/740] Linking static target lib/librte_net.a 00:01:42.667 [206/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:42.667 [207/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:42.667 [208/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:42.667 [209/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:42.667 [210/740] Generating lib/rte_power_mingw with a custom command 00:01:42.667 [211/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:42.667 [212/740] Compiling C object 
lib/librte_power.a.p/power_power_common.c.o 00:01:42.667 [213/740] Generating lib/rte_power_def with a custom command 00:01:42.667 [214/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:42.668 [215/740] Generating lib/rte_rawdev_mingw with a custom command 00:01:42.668 [216/740] Generating lib/rte_rawdev_def with a custom command 00:01:42.668 [217/740] Generating lib/rte_regexdev_mingw with a custom command 00:01:42.668 [218/740] Generating lib/rte_regexdev_def with a custom command 00:01:42.668 [219/740] Generating lib/rte_dmadev_mingw with a custom command 00:01:42.668 [220/740] Generating lib/rte_dmadev_def with a custom command 00:01:42.668 [221/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:42.668 [222/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:42.668 [223/740] Generating lib/rte_rib_def with a custom command 00:01:42.668 [224/740] Generating lib/rte_rib_mingw with a custom command 00:01:42.668 [225/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:42.668 [226/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:42.668 [227/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:42.668 [228/740] Generating lib/rte_reorder_def with a custom command 00:01:42.668 [229/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:42.668 [230/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:42.668 [231/740] Generating lib/rte_reorder_mingw with a custom command 00:01:42.668 [232/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:42.668 [233/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:42.668 [234/740] Generating lib/rte_sched_mingw with a custom command 00:01:42.668 [235/740] Generating lib/rte_sched_def with a custom command 00:01:42.668 [236/740] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:42.668 [237/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:42.668 [238/740] Generating lib/rte_security_mingw with a custom command 00:01:42.668 [239/740] Generating lib/rte_security_def with a custom command 00:01:42.668 [240/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:42.668 [241/740] Generating lib/rte_stack_def with a custom command 00:01:42.933 [242/740] Linking static target lib/librte_compressdev.a 00:01:42.933 [243/740] Generating lib/rte_stack_mingw with a custom command 00:01:42.933 [244/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:42.933 [245/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:42.933 [246/740] Generating lib/rte_vhost_def with a custom command 00:01:42.933 [247/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:42.933 [248/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:42.933 [249/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.933 [250/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:42.933 [251/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:42.933 [252/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:42.933 [253/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:42.933 [254/740] Generating lib/rte_vhost_mingw with a custom command 00:01:42.933 [255/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:42.934 [256/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:42.934 [257/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:42.934 [258/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:42.934 [259/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:42.934 
[260/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:42.934 [261/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:42.934 [262/740] Generating lib/rte_ipsec_mingw with a custom command 00:01:42.934 [263/740] Linking static target lib/librte_stack.a 00:01:42.934 [264/740] Generating lib/rte_ipsec_def with a custom command 00:01:42.934 [265/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:42.934 [266/740] Generating lib/rte_fib_mingw with a custom command 00:01:42.934 [267/740] Generating lib/rte_fib_def with a custom command 00:01:42.934 [268/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:42.934 [269/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.934 [270/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:42.934 [271/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:43.199 [272/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:43.199 [273/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:43.199 [274/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:43.199 [275/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:43.199 [276/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:43.199 [277/740] Linking static target lib/librte_rcu.a 00:01:43.199 [278/740] Linking static target lib/librte_mempool.a 00:01:43.199 [279/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.199 [280/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.199 [281/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:43.199 [282/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.199 [283/740] Compiling C object 
lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:43.199 [284/740] Generating lib/rte_port_def with a custom command 00:01:43.199 [285/740] Linking static target lib/librte_bbdev.a 00:01:43.199 [286/740] Generating lib/rte_port_mingw with a custom command 00:01:43.199 [287/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:43.199 [288/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.199 [289/740] Generating lib/rte_pdump_def with a custom command 00:01:43.199 [290/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:43.199 [291/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:43.199 [292/740] Linking static target lib/librte_rawdev.a 00:01:43.199 [293/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.199 [294/740] Linking static target lib/librte_dmadev.a 00:01:43.199 [295/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:43.199 [296/740] Generating lib/rte_pdump_mingw with a custom command 00:01:43.199 [297/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:43.199 [298/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:43.199 [299/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:43.199 [300/740] Linking static target lib/librte_gro.a 00:01:43.199 [301/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:43.199 [302/740] Linking target lib/librte_telemetry.so.23.0 00:01:43.199 [303/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:43.199 [304/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:43.199 [305/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.199 [306/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:43.199 [307/740] Linking static target lib/librte_gpudev.a 00:01:43.473 
[308/740] Linking static target lib/librte_gso.a 00:01:43.473 [309/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:43.473 [310/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:43.473 [311/740] Linking static target lib/librte_distributor.a 00:01:43.473 [312/740] Linking static target lib/librte_latencystats.a 00:01:43.473 [313/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:43.473 [314/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:43.473 [315/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:43.473 [316/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:43.473 [317/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:43.473 [318/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:43.473 [319/740] Linking static target lib/librte_eal.a 00:01:43.473 [320/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:43.473 [321/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:43.473 [322/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:43.473 [323/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:43.473 [324/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:43.473 [325/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:43.473 [326/740] Linking static target lib/librte_mbuf.a 00:01:43.473 [327/740] Generating lib/rte_table_def with a custom command 00:01:43.473 [328/740] Generating lib/rte_table_mingw with a custom command 00:01:43.737 [329/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:43.737 [330/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:43.737 [331/740] Compiling C object 
lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:43.737 [332/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.737 [333/740] Linking static target lib/librte_regexdev.a 00:01:43.737 [334/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:43.737 [335/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:43.737 [336/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:43.737 [337/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.737 [338/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:43.737 [339/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.737 [340/740] Linking static target lib/librte_ip_frag.a 00:01:43.737 [341/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:43.737 [342/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:43.737 [343/740] Generating lib/rte_pipeline_mingw with a custom command 00:01:43.737 [344/740] Generating lib/rte_pipeline_def with a custom command 00:01:43.737 [345/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:43.737 [346/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:43.737 [347/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:43.737 [348/740] Linking static target lib/librte_pcapng.a 00:01:43.737 [349/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:43.737 [350/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:43.737 [351/740] Linking static target lib/librte_security.a 00:01:43.737 [352/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:43.737 [353/740] Generating lib/rte_graph_def with a custom command 00:01:43.737 [354/740] Generating lib/rte_graph_mingw with a custom command 
00:01:43.737 [355/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.737 [356/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:43.737 [357/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:43.737 [358/740] Linking static target lib/librte_power.a 00:01:43.737 [359/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:43.737 [360/740] Linking static target lib/librte_reorder.a 00:01:44.005 [361/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.005 [362/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:44.005 [363/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:44.005 [364/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:44.005 [365/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:44.005 [366/740] Generating lib/rte_node_def with a custom command 00:01:44.005 [367/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:44.005 [368/740] Generating lib/rte_node_mingw with a custom command 00:01:44.005 [369/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:44.005 [370/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:44.005 [371/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:44.005 [372/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.005 [373/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:44.005 [374/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:44.005 [375/740] Generating drivers/rte_bus_pci_def with a custom command 00:01:44.005 [376/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:44.005 [377/740] Linking static target lib/librte_lpm.a 
00:01:44.005 [378/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:44.005 [379/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:44.005 [380/740] Linking static target lib/librte_bpf.a 00:01:44.005 [381/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:44.265 [382/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:44.265 [383/740] Generating drivers/rte_bus_vdev_def with a custom command 00:01:44.265 [384/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:44.265 [385/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.265 [386/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:44.265 [387/740] Generating drivers/rte_mempool_ring_def with a custom command 00:01:44.265 [388/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:44.265 [389/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:44.265 [390/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.265 [391/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:44.265 [392/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.265 [393/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:44.265 [394/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.265 [395/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:44.265 [396/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:44.265 [397/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:44.265 [398/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.265 [399/740] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:44.265 [400/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:44.265 [401/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:44.265 [402/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:44.265 [403/740] Linking static target lib/librte_rib.a 00:01:44.265 [404/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:44.265 [405/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.265 [406/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:44.265 [407/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.265 [408/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:44.265 [409/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:44.265 [410/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:44.265 [411/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:44.265 [412/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:44.527 [413/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:44.527 [414/740] Generating drivers/rte_net_i40e_def with a custom command 00:01:44.527 [415/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:44.527 [416/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:44.527 [417/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:44.527 [418/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.527 [419/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:44.527 [420/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:44.527 [421/740] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:44.527 [422/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:44.527 [423/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:44.527 [424/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:44.527 [425/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:44.527 [426/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:44.527 [427/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:44.527 [428/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:44.527 [429/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:44.527 [430/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:44.527 [431/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:44.527 [432/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.527 [433/740] Linking static target lib/librte_efd.a 00:01:44.527 [434/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:44.527 [435/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:44.527 [436/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:44.527 [437/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:44.527 [438/740] Linking static target lib/librte_graph.a 00:01:44.527 [439/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.792 [440/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:44.792 [441/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.792 [442/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:44.792 [443/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:44.793 [444/740] Compiling C object 
lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:44.793 [445/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.793 [446/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:44.793 [447/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:44.793 [448/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.793 [449/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:44.793 [450/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:44.793 [451/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.793 [452/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:44.793 [453/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:44.793 [454/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:44.793 [455/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:44.793 [456/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:45.062 [457/740] Linking static target lib/librte_fib.a 00:01:45.062 [458/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:45.062 [459/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:45.062 [460/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:45.062 [461/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.062 [462/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:45.062 [463/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:45.062 [464/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.062 [465/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:45.062 [466/740] 
Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:45.062 [467/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:45.062 [468/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:45.062 [469/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:45.062 [470/740] Linking static target drivers/librte_bus_vdev.a 00:01:45.331 [471/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:45.331 [472/740] Linking static target lib/librte_pdump.a 00:01:45.331 [473/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:45.331 [474/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:45.331 [475/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:45.331 [476/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:45.331 [477/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:45.331 [478/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:45.331 [479/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:45.331 [480/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:45.331 [481/740] Linking static target drivers/librte_bus_pci.a 00:01:45.331 [482/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.331 [483/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:45.331 [484/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:45.331 [485/740] Linking static target lib/librte_table.a 00:01:45.331 [486/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:45.604 [487/740] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:45.604 [488/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:45.604 [489/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:45.604 [490/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:45.604 [491/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:45.604 [492/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:45.604 [493/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.604 [494/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:45.604 [495/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:45.604 [496/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:45.604 [497/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:45.604 [498/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:45.604 [499/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:45.604 [500/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:45.866 [501/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.866 [502/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:45.866 [503/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:45.866 [504/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:45.866 [505/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:45.866 [506/740] Linking static target lib/librte_cryptodev.a 00:01:45.866 [507/740] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:45.866 [508/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:45.866 [509/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:45.866 [510/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.866 [511/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:45.866 [512/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:45.866 [513/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:45.866 [514/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:45.866 [515/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:45.866 [516/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:45.866 [517/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:45.866 [518/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:45.866 [519/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:45.866 [520/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:45.866 [521/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:45.866 [522/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:45.866 [523/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:46.127 [524/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:46.127 [525/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:46.127 [526/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:46.127 [527/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:46.127 
[528/740] Linking static target lib/librte_node.a 00:01:46.127 [529/740] Linking static target lib/librte_sched.a 00:01:46.127 [530/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:46.127 [531/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:46.127 [532/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:46.127 [533/740] Linking static target lib/librte_member.a 00:01:46.127 [534/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:46.127 [535/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.127 [536/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:46.127 [537/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:46.127 [538/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:46.127 [539/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:46.127 [540/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:46.127 [541/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:46.127 [542/740] Linking static target lib/librte_ethdev.a 00:01:46.127 [543/740] Linking static target lib/librte_ipsec.a 00:01:46.127 [544/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:46.127 [545/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.127 [546/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:46.127 [547/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:46.386 [548/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:46.386 [549/740] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:46.386 [550/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:46.386 [551/740] Linking static target lib/librte_port.a 00:01:46.386 [552/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:46.386 [553/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:46.386 [554/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:46.386 [555/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.386 [556/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.386 [557/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.386 [558/740] Linking static target drivers/librte_mempool_ring.a 00:01:46.386 [559/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:46.386 [560/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:46.386 [561/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:46.386 [562/740] Linking static target lib/librte_hash.a 00:01:46.386 [563/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:46.386 [564/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.386 [565/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:46.386 [566/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:46.386 [567/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:46.645 [568/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.645 [569/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:46.645 [570/740] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:46.645 [571/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.645 [572/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:46.645 [573/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:46.645 [574/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:46.645 [575/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:46.645 [576/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:46.645 [577/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:46.645 [578/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:46.645 [579/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:46.645 [580/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:46.645 [581/740] Linking static target lib/librte_eventdev.a 00:01:46.645 [582/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:46.645 [583/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:46.645 [584/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:46.646 [585/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.646 [586/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:46.646 [587/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:46.646 [588/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:46.904 [589/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:46.904 [590/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:46.904 [591/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 
00:01:46.904 [592/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:46.904 [593/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:46.904 [594/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:46.904 [595/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:46.904 [596/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:46.904 [597/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:46.904 [598/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:46.904 [599/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:46.904 [600/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:47.163 [601/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:47.163 [602/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:01:47.163 [603/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.163 [604/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:47.163 [605/740] Linking static target lib/librte_acl.a 00:01:47.163 [606/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:47.163 [607/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:01:47.163 [608/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:47.423 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:47.423 [610/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:47.423 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:47.423 [612/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.423 [613/740] Generating 
lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.423 [614/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:47.423 [615/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:47.988 [616/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:47.988 [617/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:48.246 [618/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:48.504 [619/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:48.763 [620/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:49.021 [621/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:49.021 [622/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.588 [623/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.588 [624/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:49.588 [625/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:49.847 [626/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:50.105 [627/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:50.105 [628/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:50.105 [629/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:50.105 [630/740] Linking static target drivers/librte_net_i40e.a 00:01:50.364 [631/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:50.622 [632/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:50.881 [633/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.168 [634/740] Generating 
lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.105 [635/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.105 [636/740] Linking target lib/librte_eal.so.23.0 00:01:55.363 [637/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:55.363 [638/740] Linking target lib/librte_ring.so.23.0 00:01:55.363 [639/740] Linking target lib/librte_rawdev.so.23.0 00:01:55.363 [640/740] Linking target lib/librte_pci.so.23.0 00:01:55.363 [641/740] Linking target lib/librte_jobstats.so.23.0 00:01:55.363 [642/740] Linking target lib/librte_graph.so.23.0 00:01:55.363 [643/740] Linking target lib/librte_meter.so.23.0 00:01:55.363 [644/740] Linking target lib/librte_cfgfile.so.23.0 00:01:55.363 [645/740] Linking target lib/librte_timer.so.23.0 00:01:55.363 [646/740] Linking target lib/librte_dmadev.so.23.0 00:01:55.363 [647/740] Linking target lib/librte_stack.so.23.0 00:01:55.363 [648/740] Linking target drivers/librte_bus_vdev.so.23.0 00:01:55.363 [649/740] Linking target lib/librte_acl.so.23.0 00:01:55.363 [650/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:55.622 [651/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:55.622 [652/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:55.622 [653/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:55.622 [654/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:55.622 [655/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:55.622 [656/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:55.622 [657/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:55.622 [658/740] Linking target lib/librte_rcu.so.23.0 
00:01:55.622 [659/740] Linking target drivers/librte_bus_pci.so.23.0 00:01:55.622 [660/740] Linking target lib/librte_mempool.so.23.0 00:01:55.622 [661/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:55.622 [662/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:55.622 [663/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:55.622 [664/740] Linking target lib/librte_rib.so.23.0 00:01:55.622 [665/740] Linking target lib/librte_mbuf.so.23.0 00:01:55.622 [666/740] Linking target drivers/librte_mempool_ring.so.23.0 00:01:55.881 [667/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:55.881 [668/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:55.881 [669/740] Linking target lib/librte_bbdev.so.23.0 00:01:55.881 [670/740] Linking target lib/librte_net.so.23.0 00:01:55.881 [671/740] Linking target lib/librte_gpudev.so.23.0 00:01:55.881 [672/740] Linking target lib/librte_compressdev.so.23.0 00:01:55.881 [673/740] Linking target lib/librte_regexdev.so.23.0 00:01:55.881 [674/740] Linking target lib/librte_distributor.so.23.0 00:01:55.881 [675/740] Linking target lib/librte_reorder.so.23.0 00:01:55.881 [676/740] Linking target lib/librte_cryptodev.so.23.0 00:01:55.881 [677/740] Linking target lib/librte_fib.so.23.0 00:01:55.881 [678/740] Linking target lib/librte_sched.so.23.0 00:01:56.139 [679/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:56.139 [680/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:56.139 [681/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:56.140 [682/740] Linking target lib/librte_hash.so.23.0 00:01:56.140 [683/740] Linking target lib/librte_cmdline.so.23.0 00:01:56.140 [684/740] Linking target lib/librte_security.so.23.0 
00:01:56.140 [685/740] Linking target lib/librte_ethdev.so.23.0 00:01:56.140 [686/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:56.140 [687/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:56.140 [688/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:56.140 [689/740] Linking target lib/librte_efd.so.23.0 00:01:56.399 [690/740] Linking target lib/librte_lpm.so.23.0 00:01:56.399 [691/740] Linking target lib/librte_member.so.23.0 00:01:56.399 [692/740] Linking target lib/librte_ipsec.so.23.0 00:01:56.399 [693/740] Linking target lib/librte_metrics.so.23.0 00:01:56.399 [694/740] Linking target lib/librte_bpf.so.23.0 00:01:56.399 [695/740] Linking target lib/librte_gso.so.23.0 00:01:56.399 [696/740] Linking target lib/librte_pcapng.so.23.0 00:01:56.399 [697/740] Linking target lib/librte_gro.so.23.0 00:01:56.399 [698/740] Linking target lib/librte_ip_frag.so.23.0 00:01:56.399 [699/740] Linking target lib/librte_power.so.23.0 00:01:56.399 [700/740] Linking target lib/librte_eventdev.so.23.0 00:01:56.399 [701/740] Linking target drivers/librte_net_i40e.so.23.0 00:01:56.399 [702/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:56.399 [703/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:56.399 [704/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:56.399 [705/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:56.399 [706/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:56.399 [707/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:56.399 [708/740] Linking target lib/librte_node.so.23.0 00:01:56.399 [709/740] Linking target lib/librte_bitratestats.so.23.0 00:01:56.399 [710/740] Linking target 
lib/librte_latencystats.so.23.0 00:01:56.399 [711/740] Linking target lib/librte_pdump.so.23.0 00:01:56.399 [712/740] Linking target lib/librte_port.so.23.0 00:01:56.658 [713/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:56.658 [714/740] Linking target lib/librte_table.so.23.0 00:01:56.658 [715/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:56.658 [716/740] Linking static target lib/librte_vhost.a 00:01:56.917 [717/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:57.854 [718/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:57.854 [719/740] Linking static target lib/librte_pipeline.a 00:01:58.123 [720/740] Linking target app/dpdk-test-gpudev 00:01:58.123 [721/740] Linking target app/dpdk-pdump 00:01:58.123 [722/740] Linking target app/dpdk-test-security-perf 00:01:58.123 [723/740] Linking target app/dpdk-test-pipeline 00:01:58.123 [724/740] Linking target app/dpdk-proc-info 00:01:58.123 [725/740] Linking target app/dpdk-test-acl 00:01:58.123 [726/740] Linking target app/dpdk-test-cmdline 00:01:58.123 [727/740] Linking target app/dpdk-test-eventdev 00:01:58.123 [728/740] Linking target app/dpdk-test-bbdev 00:01:58.123 [729/740] Linking target app/dpdk-test-sad 00:01:58.123 [730/740] Linking target app/dpdk-test-fib 00:01:58.123 [731/740] Linking target app/dpdk-dumpcap 00:01:58.123 [732/740] Linking target app/dpdk-test-regex 00:01:58.123 [733/740] Linking target app/dpdk-test-flow-perf 00:01:58.124 [734/740] Linking target app/dpdk-test-compress-perf 00:01:58.124 [735/740] Linking target app/dpdk-test-crypto-perf 00:01:58.124 [736/740] Linking target app/dpdk-testpmd 00:01:58.698 [737/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.698 [738/740] Linking target lib/librte_vhost.so.23.0 00:02:02.906 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:02.906 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:02.906 22:11:23 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:02:02.906 22:11:23 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:02.906 22:11:23 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:02:02.906 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:02.906 [0/1] Installing files. 00:02:02.906 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:02.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:02.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:02.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:02.907 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:02.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.910 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:02.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:02.912 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:02.912 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:02.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:02.912 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:02:02.912 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing 
lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.912 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 
Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing 
lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing 
lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing lib/librte_node.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:02.913 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:02.913 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:02.913 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.913 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:02.913 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.913 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.913 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.913 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.913 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.913 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.913 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.913 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.913 Installing app/dpdk-test-eventdev to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.913 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.913 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.913 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.913 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.913 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.913 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.913 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.913 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.913 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:02.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:02.917 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:02.917 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:02.917 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:02.917 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:02.917 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:02.917 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:02.917 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:02.917 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:02.917 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:02.917 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:02.917 
Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:02.917 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:02.917 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:02.917 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:02.917 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:02.917 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:02.917 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:02.917 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:02.917 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:02.917 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:02.917 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:02.917 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:02.917 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:02.917 Installing symlink pointing to librte_cmdline.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:02.917 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:02.917 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:02.917 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:02.917 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:02.917 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:02.917 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:02.917 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:02.917 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:02.917 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:02.917 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:02.917 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:02.917 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:02.917 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:02.917 
Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:02.917 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:02.917 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:02.918 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:02.918 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:02.918 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:02.918 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:02.918 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:02.918 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:02.918 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:02.918 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:02.918 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:02.918 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:02.918 Installing symlink pointing to 
librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:02.918 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:02.918 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:02.918 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:02.918 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:02.918 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:02.918 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:02.918 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:02.918 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:02.918 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:02.918 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:02.918 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:02.918 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:02.918 Installing symlink pointing to librte_lpm.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:02.918 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:02.918 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:02.918 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:02.918 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:02.918 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:02.918 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:02.918 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:02.918 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:02.918 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:02.918 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:02.918 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:02.918 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:02.918 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:02.918 
Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:02.918 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:02.918 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:02.918 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:02.918 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:02.918 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:02.918 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:02.918 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:02.918 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:02.918 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:02.918 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:02.918 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:02.918 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:02.918 Installing symlink pointing to librte_fib.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:02.918 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:02.918 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:02.918 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:02.918 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:02.918 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:02.918 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:02.918 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:02.918 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:02.918 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:02.918 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:02.918 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:02.918 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:02.918 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:02.918 Installing symlink pointing to 
librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:02.918 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:02.918 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:02.918 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:02.918 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:02.918 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:02.918 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:02.918 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:02.918 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:02.918 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:02.918 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:02.918 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:02.918 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:02.918 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:02.918 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:02.918 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:02.918 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:02.918 './librte_net_i40e.so.23' -> 
'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:02.918 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:02.918 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:02.918 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:02.918 22:11:23 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:02.918 22:11:23 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:02.918 00:02:02.918 real 0m28.924s 00:02:02.918 user 7m42.413s 00:02:02.918 sys 2m0.529s 00:02:02.918 22:11:23 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:02.918 22:11:23 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:02.918 ************************************ 00:02:02.918 END TEST build_native_dpdk 00:02:02.918 ************************************ 00:02:02.918 22:11:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:02.918 22:11:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:02.918 22:11:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:02.918 22:11:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:02.918 22:11:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:02.918 22:11:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:02.918 22:11:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:03.179 22:11:23 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:03.179 Using 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:03.438 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.438 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.438 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:03.697 Using 'verbs' RDMA provider 00:02:16.855 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:29.077 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:29.336 Creating mk/config.mk...done. 00:02:29.336 Creating mk/cc.flags.mk...done. 00:02:29.336 Type 'make' to build. 00:02:29.336 22:11:50 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:02:29.336 22:11:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:29.336 22:11:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:29.336 22:11:50 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.336 ************************************ 00:02:29.336 START TEST make 00:02:29.336 ************************************ 00:02:29.336 22:11:50 make -- common/autotest_common.sh@1129 -- $ make -j96 00:02:31.246 The Meson build system 00:02:31.246 Version: 1.5.0 00:02:31.246 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:31.246 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:31.246 Build type: native build 00:02:31.246 Project name: libvfio-user 00:02:31.246 Project version: 0.0.1 00:02:31.246 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:31.246 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:31.246 Host machine cpu family: x86_64 00:02:31.246 Host machine cpu: x86_64 00:02:31.246 Run-time dependency threads found: YES 00:02:31.246 Library dl found: YES 
00:02:31.246 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:31.246 Run-time dependency json-c found: YES 0.17 00:02:31.246 Run-time dependency cmocka found: YES 1.1.7 00:02:31.246 Program pytest-3 found: NO 00:02:31.246 Program flake8 found: NO 00:02:31.246 Program misspell-fixer found: NO 00:02:31.246 Program restructuredtext-lint found: NO 00:02:31.246 Program valgrind found: YES (/usr/bin/valgrind) 00:02:31.246 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:31.246 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:31.246 Compiler for C supports arguments -Wwrite-strings: YES 00:02:31.246 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:31.246 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:31.246 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:31.246 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:31.246 Build targets in project: 8 00:02:31.246 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:31.246 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:31.246 00:02:31.246 libvfio-user 0.0.1 00:02:31.246 00:02:31.246 User defined options 00:02:31.246 buildtype : debug 00:02:31.246 default_library: shared 00:02:31.246 libdir : /usr/local/lib 00:02:31.246 00:02:31.246 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:32.180 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:32.180 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:32.180 [2/37] Compiling C object samples/null.p/null.c.o 00:02:32.180 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:32.180 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:32.180 [5/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:32.180 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:32.180 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:32.180 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:32.180 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:32.180 [10/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:32.180 [11/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:32.180 [12/37] Compiling C object samples/server.p/server.c.o 00:02:32.180 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:32.180 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:32.180 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:32.180 [16/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:32.180 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:32.180 [18/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:32.180 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:32.180 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:32.180 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:32.181 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:32.181 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:32.181 [24/37] Compiling C object samples/client.p/client.c.o 00:02:32.181 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:32.181 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:32.181 [27/37] Linking target samples/client 00:02:32.181 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:32.440 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:32.440 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:32.440 [31/37] Linking target test/unit_tests 00:02:32.440 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:32.440 [33/37] Linking target samples/lspci 00:02:32.440 [34/37] Linking target samples/null 00:02:32.440 [35/37] Linking target samples/gpio-pci-idio-16 00:02:32.440 [36/37] Linking target samples/server 00:02:32.440 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:32.440 INFO: autodetecting backend as ninja 00:02:32.440 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:32.699 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:32.958 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:32.958 ninja: no work to do. 
00:02:59.524 CC lib/ut_mock/mock.o 00:02:59.524 CC lib/ut/ut.o 00:02:59.524 CC lib/log/log.o 00:02:59.524 CC lib/log/log_flags.o 00:02:59.524 CC lib/log/log_deprecated.o 00:02:59.524 LIB libspdk_log.a 00:02:59.524 LIB libspdk_ut_mock.a 00:02:59.524 LIB libspdk_ut.a 00:02:59.524 SO libspdk_ut_mock.so.6.0 00:02:59.524 SO libspdk_ut.so.2.0 00:02:59.524 SO libspdk_log.so.7.1 00:02:59.524 SYMLINK libspdk_ut_mock.so 00:02:59.524 SYMLINK libspdk_log.so 00:02:59.524 SYMLINK libspdk_ut.so 00:02:59.524 CXX lib/trace_parser/trace.o 00:02:59.524 CC lib/util/base64.o 00:02:59.524 CC lib/util/bit_array.o 00:02:59.524 CC lib/util/cpuset.o 00:02:59.524 CC lib/ioat/ioat.o 00:02:59.524 CC lib/util/crc16.o 00:02:59.524 CC lib/util/crc32c.o 00:02:59.524 CC lib/util/crc32.o 00:02:59.524 CC lib/util/crc32_ieee.o 00:02:59.524 CC lib/util/crc64.o 00:02:59.524 CC lib/util/dif.o 00:02:59.524 CC lib/util/fd.o 00:02:59.524 CC lib/util/fd_group.o 00:02:59.524 CC lib/util/file.o 00:02:59.524 CC lib/util/hexlify.o 00:02:59.524 CC lib/dma/dma.o 00:02:59.524 CC lib/util/iov.o 00:02:59.524 CC lib/util/math.o 00:02:59.524 CC lib/util/net.o 00:02:59.524 CC lib/util/strerror_tls.o 00:02:59.524 CC lib/util/pipe.o 00:02:59.524 CC lib/util/string.o 00:02:59.524 CC lib/util/uuid.o 00:02:59.524 CC lib/util/xor.o 00:02:59.524 CC lib/util/zipf.o 00:02:59.524 CC lib/util/md5.o 00:02:59.783 CC lib/vfio_user/host/vfio_user_pci.o 00:02:59.783 CC lib/vfio_user/host/vfio_user.o 00:02:59.783 LIB libspdk_dma.a 00:02:59.783 SO libspdk_dma.so.5.0 00:02:59.783 LIB libspdk_ioat.a 00:02:59.783 SO libspdk_ioat.so.7.0 00:02:59.783 SYMLINK libspdk_dma.so 00:02:59.783 SYMLINK libspdk_ioat.so 00:02:59.783 LIB libspdk_vfio_user.a 00:03:00.043 SO libspdk_vfio_user.so.5.0 00:03:00.043 SYMLINK libspdk_vfio_user.so 00:03:00.043 LIB libspdk_util.a 00:03:00.043 SO libspdk_util.so.10.1 00:03:00.043 SYMLINK libspdk_util.so 00:03:00.610 CC lib/conf/conf.o 00:03:00.610 CC lib/vmd/vmd.o 00:03:00.610 CC lib/rdma_utils/rdma_utils.o 
00:03:00.610 CC lib/vmd/led.o 00:03:00.610 CC lib/idxd/idxd.o 00:03:00.610 CC lib/idxd/idxd_user.o 00:03:00.610 CC lib/idxd/idxd_kernel.o 00:03:00.610 CC lib/json/json_parse.o 00:03:00.610 CC lib/json/json_util.o 00:03:00.610 CC lib/json/json_write.o 00:03:00.610 CC lib/env_dpdk/env.o 00:03:00.610 CC lib/env_dpdk/memory.o 00:03:00.610 CC lib/env_dpdk/pci.o 00:03:00.610 CC lib/env_dpdk/init.o 00:03:00.610 CC lib/env_dpdk/threads.o 00:03:00.610 CC lib/env_dpdk/pci_ioat.o 00:03:00.610 CC lib/env_dpdk/pci_virtio.o 00:03:00.610 CC lib/env_dpdk/pci_vmd.o 00:03:00.610 CC lib/env_dpdk/pci_idxd.o 00:03:00.610 CC lib/env_dpdk/pci_event.o 00:03:00.610 CC lib/env_dpdk/sigbus_handler.o 00:03:00.610 CC lib/env_dpdk/pci_dpdk.o 00:03:00.610 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:00.610 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:00.868 LIB libspdk_conf.a 00:03:00.868 SO libspdk_conf.so.6.0 00:03:00.868 LIB libspdk_rdma_utils.a 00:03:00.868 LIB libspdk_json.a 00:03:00.868 SO libspdk_rdma_utils.so.1.0 00:03:00.868 SO libspdk_json.so.6.0 00:03:00.868 SYMLINK libspdk_conf.so 00:03:00.868 SYMLINK libspdk_rdma_utils.so 00:03:00.868 SYMLINK libspdk_json.so 00:03:01.127 LIB libspdk_idxd.a 00:03:01.127 LIB libspdk_vmd.a 00:03:01.127 SO libspdk_idxd.so.12.1 00:03:01.127 SO libspdk_vmd.so.6.0 00:03:01.127 LIB libspdk_trace_parser.a 00:03:01.127 SO libspdk_trace_parser.so.6.0 00:03:01.127 SYMLINK libspdk_idxd.so 00:03:01.127 SYMLINK libspdk_vmd.so 00:03:01.128 CC lib/rdma_provider/common.o 00:03:01.128 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:01.128 SYMLINK libspdk_trace_parser.so 00:03:01.128 CC lib/jsonrpc/jsonrpc_server.o 00:03:01.128 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:01.128 CC lib/jsonrpc/jsonrpc_client.o 00:03:01.128 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:01.386 LIB libspdk_rdma_provider.a 00:03:01.386 SO libspdk_rdma_provider.so.7.0 00:03:01.386 LIB libspdk_jsonrpc.a 00:03:01.386 SO libspdk_jsonrpc.so.6.0 00:03:01.386 SYMLINK libspdk_rdma_provider.so 00:03:01.646 SYMLINK 
libspdk_jsonrpc.so 00:03:01.646 LIB libspdk_env_dpdk.a 00:03:01.646 SO libspdk_env_dpdk.so.15.1 00:03:01.646 SYMLINK libspdk_env_dpdk.so 00:03:01.905 CC lib/rpc/rpc.o 00:03:02.164 LIB libspdk_rpc.a 00:03:02.164 SO libspdk_rpc.so.6.0 00:03:02.164 SYMLINK libspdk_rpc.so 00:03:02.424 CC lib/trace/trace.o 00:03:02.424 CC lib/trace/trace_flags.o 00:03:02.424 CC lib/trace/trace_rpc.o 00:03:02.424 CC lib/notify/notify.o 00:03:02.424 CC lib/notify/notify_rpc.o 00:03:02.424 CC lib/keyring/keyring.o 00:03:02.424 CC lib/keyring/keyring_rpc.o 00:03:02.684 LIB libspdk_notify.a 00:03:02.685 SO libspdk_notify.so.6.0 00:03:02.685 LIB libspdk_trace.a 00:03:02.685 LIB libspdk_keyring.a 00:03:02.685 SO libspdk_keyring.so.2.0 00:03:02.685 SO libspdk_trace.so.11.0 00:03:02.685 SYMLINK libspdk_notify.so 00:03:02.685 SYMLINK libspdk_keyring.so 00:03:02.945 SYMLINK libspdk_trace.so 00:03:03.206 CC lib/thread/thread.o 00:03:03.206 CC lib/thread/iobuf.o 00:03:03.206 CC lib/sock/sock.o 00:03:03.206 CC lib/sock/sock_rpc.o 00:03:03.465 LIB libspdk_sock.a 00:03:03.465 SO libspdk_sock.so.10.0 00:03:03.725 SYMLINK libspdk_sock.so 00:03:03.984 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:03.984 CC lib/nvme/nvme_ctrlr.o 00:03:03.984 CC lib/nvme/nvme_fabric.o 00:03:03.984 CC lib/nvme/nvme_ns_cmd.o 00:03:03.984 CC lib/nvme/nvme_ns.o 00:03:03.984 CC lib/nvme/nvme_pcie_common.o 00:03:03.984 CC lib/nvme/nvme_pcie.o 00:03:03.984 CC lib/nvme/nvme_qpair.o 00:03:03.984 CC lib/nvme/nvme.o 00:03:03.984 CC lib/nvme/nvme_quirks.o 00:03:03.984 CC lib/nvme/nvme_transport.o 00:03:03.984 CC lib/nvme/nvme_discovery.o 00:03:03.984 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:03.984 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:03.984 CC lib/nvme/nvme_tcp.o 00:03:03.984 CC lib/nvme/nvme_opal.o 00:03:03.984 CC lib/nvme/nvme_io_msg.o 00:03:03.984 CC lib/nvme/nvme_poll_group.o 00:03:03.984 CC lib/nvme/nvme_zns.o 00:03:03.984 CC lib/nvme/nvme_stubs.o 00:03:03.984 CC lib/nvme/nvme_auth.o 00:03:03.984 CC lib/nvme/nvme_cuse.o 00:03:03.984 CC 
lib/nvme/nvme_vfio_user.o 00:03:03.984 CC lib/nvme/nvme_rdma.o 00:03:04.242 LIB libspdk_thread.a 00:03:04.242 SO libspdk_thread.so.11.0 00:03:04.500 SYMLINK libspdk_thread.so 00:03:04.759 CC lib/blob/zeroes.o 00:03:04.759 CC lib/blob/blobstore.o 00:03:04.759 CC lib/blob/request.o 00:03:04.759 CC lib/blob/blob_bs_dev.o 00:03:04.759 CC lib/init/json_config.o 00:03:04.759 CC lib/init/subsystem.o 00:03:04.759 CC lib/init/subsystem_rpc.o 00:03:04.759 CC lib/accel/accel.o 00:03:04.759 CC lib/init/rpc.o 00:03:04.759 CC lib/accel/accel_rpc.o 00:03:04.759 CC lib/accel/accel_sw.o 00:03:04.759 CC lib/fsdev/fsdev.o 00:03:04.759 CC lib/fsdev/fsdev_io.o 00:03:04.759 CC lib/fsdev/fsdev_rpc.o 00:03:04.759 CC lib/vfu_tgt/tgt_endpoint.o 00:03:04.759 CC lib/vfu_tgt/tgt_rpc.o 00:03:04.759 CC lib/virtio/virtio.o 00:03:04.759 CC lib/virtio/virtio_vhost_user.o 00:03:04.759 CC lib/virtio/virtio_vfio_user.o 00:03:04.759 CC lib/virtio/virtio_pci.o 00:03:05.018 LIB libspdk_init.a 00:03:05.018 SO libspdk_init.so.6.0 00:03:05.018 LIB libspdk_vfu_tgt.a 00:03:05.018 SYMLINK libspdk_init.so 00:03:05.018 LIB libspdk_virtio.a 00:03:05.018 SO libspdk_vfu_tgt.so.3.0 00:03:05.018 SO libspdk_virtio.so.7.0 00:03:05.018 SYMLINK libspdk_vfu_tgt.so 00:03:05.278 SYMLINK libspdk_virtio.so 00:03:05.278 LIB libspdk_fsdev.a 00:03:05.278 SO libspdk_fsdev.so.2.0 00:03:05.278 CC lib/event/app.o 00:03:05.278 SYMLINK libspdk_fsdev.so 00:03:05.278 CC lib/event/reactor.o 00:03:05.278 CC lib/event/log_rpc.o 00:03:05.278 CC lib/event/app_rpc.o 00:03:05.278 CC lib/event/scheduler_static.o 00:03:05.538 LIB libspdk_accel.a 00:03:05.538 SO libspdk_accel.so.16.0 00:03:05.538 SYMLINK libspdk_accel.so 00:03:05.798 LIB libspdk_nvme.a 00:03:05.798 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:05.798 LIB libspdk_event.a 00:03:05.798 SO libspdk_event.so.14.0 00:03:05.798 SO libspdk_nvme.so.15.0 00:03:05.798 SYMLINK libspdk_event.so 00:03:06.057 SYMLINK libspdk_nvme.so 00:03:06.057 CC lib/bdev/bdev_rpc.o 00:03:06.058 CC 
lib/bdev/bdev.o 00:03:06.058 CC lib/bdev/bdev_zone.o 00:03:06.058 CC lib/bdev/part.o 00:03:06.058 CC lib/bdev/scsi_nvme.o 00:03:06.058 LIB libspdk_fuse_dispatcher.a 00:03:06.317 SO libspdk_fuse_dispatcher.so.1.0 00:03:06.317 SYMLINK libspdk_fuse_dispatcher.so 00:03:06.885 LIB libspdk_blob.a 00:03:06.885 SO libspdk_blob.so.12.0 00:03:06.885 SYMLINK libspdk_blob.so 00:03:07.453 CC lib/lvol/lvol.o 00:03:07.453 CC lib/blobfs/blobfs.o 00:03:07.453 CC lib/blobfs/tree.o 00:03:08.022 LIB libspdk_bdev.a 00:03:08.022 SO libspdk_bdev.so.17.0 00:03:08.022 LIB libspdk_blobfs.a 00:03:08.022 SO libspdk_blobfs.so.11.0 00:03:08.022 LIB libspdk_lvol.a 00:03:08.022 SYMLINK libspdk_bdev.so 00:03:08.022 SO libspdk_lvol.so.11.0 00:03:08.022 SYMLINK libspdk_blobfs.so 00:03:08.022 SYMLINK libspdk_lvol.so 00:03:08.283 CC lib/scsi/dev.o 00:03:08.283 CC lib/scsi/lun.o 00:03:08.283 CC lib/scsi/port.o 00:03:08.283 CC lib/scsi/scsi.o 00:03:08.283 CC lib/ublk/ublk.o 00:03:08.283 CC lib/scsi/scsi_bdev.o 00:03:08.283 CC lib/ublk/ublk_rpc.o 00:03:08.283 CC lib/scsi/scsi_pr.o 00:03:08.283 CC lib/scsi/scsi_rpc.o 00:03:08.283 CC lib/nvmf/ctrlr.o 00:03:08.283 CC lib/scsi/task.o 00:03:08.283 CC lib/nvmf/ctrlr_discovery.o 00:03:08.283 CC lib/nvmf/ctrlr_bdev.o 00:03:08.283 CC lib/nvmf/subsystem.o 00:03:08.283 CC lib/nbd/nbd.o 00:03:08.283 CC lib/nvmf/nvmf.o 00:03:08.283 CC lib/nbd/nbd_rpc.o 00:03:08.283 CC lib/ftl/ftl_core.o 00:03:08.283 CC lib/nvmf/nvmf_rpc.o 00:03:08.283 CC lib/ftl/ftl_init.o 00:03:08.283 CC lib/nvmf/transport.o 00:03:08.283 CC lib/ftl/ftl_layout.o 00:03:08.283 CC lib/nvmf/tcp.o 00:03:08.283 CC lib/nvmf/stubs.o 00:03:08.283 CC lib/ftl/ftl_debug.o 00:03:08.283 CC lib/ftl/ftl_io.o 00:03:08.283 CC lib/nvmf/mdns_server.o 00:03:08.283 CC lib/ftl/ftl_sb.o 00:03:08.283 CC lib/ftl/ftl_l2p.o 00:03:08.283 CC lib/nvmf/vfio_user.o 00:03:08.283 CC lib/nvmf/rdma.o 00:03:08.283 CC lib/nvmf/auth.o 00:03:08.283 CC lib/ftl/ftl_nv_cache.o 00:03:08.283 CC lib/ftl/ftl_l2p_flat.o 00:03:08.283 CC 
lib/ftl/ftl_band.o 00:03:08.283 CC lib/ftl/ftl_band_ops.o 00:03:08.283 CC lib/ftl/ftl_writer.o 00:03:08.283 CC lib/ftl/ftl_rq.o 00:03:08.283 CC lib/ftl/ftl_reloc.o 00:03:08.283 CC lib/ftl/ftl_l2p_cache.o 00:03:08.283 CC lib/ftl/ftl_p2l.o 00:03:08.283 CC lib/ftl/ftl_p2l_log.o 00:03:08.283 CC lib/ftl/mngt/ftl_mngt.o 00:03:08.283 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:08.283 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:08.283 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:08.283 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:08.283 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:08.283 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:08.283 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:08.283 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:08.544 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:08.544 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:08.544 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:08.544 CC lib/ftl/utils/ftl_conf.o 00:03:08.544 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:08.544 CC lib/ftl/utils/ftl_md.o 00:03:08.544 CC lib/ftl/utils/ftl_mempool.o 00:03:08.544 CC lib/ftl/utils/ftl_bitmap.o 00:03:08.544 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:08.544 CC lib/ftl/utils/ftl_property.o 00:03:08.544 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:08.544 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:08.544 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:08.544 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:08.544 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:08.544 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:08.544 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:08.544 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:08.544 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:08.544 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:08.544 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:08.544 CC lib/ftl/base/ftl_base_dev.o 00:03:08.544 CC lib/ftl/ftl_trace.o 00:03:08.544 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:08.544 CC lib/ftl/base/ftl_base_bdev.o 00:03:09.114 LIB libspdk_scsi.a 00:03:09.114 SO libspdk_scsi.so.9.0 00:03:09.114 LIB libspdk_ublk.a 00:03:09.114 LIB libspdk_nbd.a 00:03:09.114 SO 
libspdk_ublk.so.3.0 00:03:09.114 SO libspdk_nbd.so.7.0 00:03:09.114 SYMLINK libspdk_scsi.so 00:03:09.114 SYMLINK libspdk_ublk.so 00:03:09.114 SYMLINK libspdk_nbd.so 00:03:09.373 LIB libspdk_ftl.a 00:03:09.632 CC lib/vhost/vhost.o 00:03:09.632 CC lib/vhost/vhost_rpc.o 00:03:09.632 CC lib/vhost/vhost_scsi.o 00:03:09.632 CC lib/vhost/vhost_blk.o 00:03:09.632 CC lib/vhost/rte_vhost_user.o 00:03:09.632 CC lib/iscsi/conn.o 00:03:09.632 CC lib/iscsi/init_grp.o 00:03:09.632 CC lib/iscsi/iscsi.o 00:03:09.632 CC lib/iscsi/param.o 00:03:09.632 CC lib/iscsi/portal_grp.o 00:03:09.632 CC lib/iscsi/tgt_node.o 00:03:09.632 CC lib/iscsi/iscsi_subsystem.o 00:03:09.632 CC lib/iscsi/iscsi_rpc.o 00:03:09.632 CC lib/iscsi/task.o 00:03:09.632 SO libspdk_ftl.so.9.0 00:03:09.891 SYMLINK libspdk_ftl.so 00:03:10.461 LIB libspdk_nvmf.a 00:03:10.461 LIB libspdk_vhost.a 00:03:10.461 SO libspdk_nvmf.so.20.0 00:03:10.461 SO libspdk_vhost.so.8.0 00:03:10.461 SYMLINK libspdk_vhost.so 00:03:10.461 SYMLINK libspdk_nvmf.so 00:03:10.461 LIB libspdk_iscsi.a 00:03:10.720 SO libspdk_iscsi.so.8.0 00:03:10.720 SYMLINK libspdk_iscsi.so 00:03:11.289 CC module/vfu_device/vfu_virtio.o 00:03:11.289 CC module/vfu_device/vfu_virtio_scsi.o 00:03:11.289 CC module/vfu_device/vfu_virtio_blk.o 00:03:11.289 CC module/env_dpdk/env_dpdk_rpc.o 00:03:11.289 CC module/vfu_device/vfu_virtio_rpc.o 00:03:11.289 CC module/vfu_device/vfu_virtio_fs.o 00:03:11.548 CC module/accel/iaa/accel_iaa.o 00:03:11.548 CC module/accel/iaa/accel_iaa_rpc.o 00:03:11.548 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:11.548 LIB libspdk_env_dpdk_rpc.a 00:03:11.548 CC module/accel/error/accel_error.o 00:03:11.548 CC module/accel/error/accel_error_rpc.o 00:03:11.548 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:11.548 CC module/accel/ioat/accel_ioat.o 00:03:11.548 CC module/keyring/linux/keyring.o 00:03:11.548 CC module/accel/ioat/accel_ioat_rpc.o 00:03:11.548 CC module/fsdev/aio/fsdev_aio.o 00:03:11.548 CC 
module/keyring/linux/keyring_rpc.o 00:03:11.548 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:11.548 CC module/fsdev/aio/linux_aio_mgr.o 00:03:11.548 CC module/sock/posix/posix.o 00:03:11.548 CC module/keyring/file/keyring.o 00:03:11.548 CC module/accel/dsa/accel_dsa.o 00:03:11.548 CC module/keyring/file/keyring_rpc.o 00:03:11.548 CC module/accel/dsa/accel_dsa_rpc.o 00:03:11.548 CC module/blob/bdev/blob_bdev.o 00:03:11.548 CC module/scheduler/gscheduler/gscheduler.o 00:03:11.548 SO libspdk_env_dpdk_rpc.so.6.0 00:03:11.548 SYMLINK libspdk_env_dpdk_rpc.so 00:03:11.548 LIB libspdk_keyring_linux.a 00:03:11.548 LIB libspdk_keyring_file.a 00:03:11.548 LIB libspdk_scheduler_gscheduler.a 00:03:11.548 LIB libspdk_scheduler_dpdk_governor.a 00:03:11.548 SO libspdk_keyring_linux.so.1.0 00:03:11.808 SO libspdk_keyring_file.so.2.0 00:03:11.808 LIB libspdk_accel_iaa.a 00:03:11.808 LIB libspdk_accel_ioat.a 00:03:11.808 LIB libspdk_scheduler_dynamic.a 00:03:11.808 LIB libspdk_accel_error.a 00:03:11.808 SO libspdk_scheduler_gscheduler.so.4.0 00:03:11.808 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:11.808 SO libspdk_accel_ioat.so.6.0 00:03:11.808 SO libspdk_accel_iaa.so.3.0 00:03:11.808 SYMLINK libspdk_keyring_linux.so 00:03:11.808 SO libspdk_scheduler_dynamic.so.4.0 00:03:11.808 SO libspdk_accel_error.so.2.0 00:03:11.808 SYMLINK libspdk_keyring_file.so 00:03:11.808 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:11.808 SYMLINK libspdk_scheduler_gscheduler.so 00:03:11.808 LIB libspdk_blob_bdev.a 00:03:11.808 SYMLINK libspdk_accel_ioat.so 00:03:11.808 LIB libspdk_accel_dsa.a 00:03:11.808 SYMLINK libspdk_accel_iaa.so 00:03:11.808 SO libspdk_blob_bdev.so.12.0 00:03:11.808 SYMLINK libspdk_scheduler_dynamic.so 00:03:11.808 SYMLINK libspdk_accel_error.so 00:03:11.808 SO libspdk_accel_dsa.so.5.0 00:03:11.808 SYMLINK libspdk_blob_bdev.so 00:03:11.808 LIB libspdk_vfu_device.a 00:03:11.808 SYMLINK libspdk_accel_dsa.so 00:03:11.808 SO libspdk_vfu_device.so.3.0 00:03:12.067 SYMLINK 
libspdk_vfu_device.so 00:03:12.067 LIB libspdk_fsdev_aio.a 00:03:12.067 LIB libspdk_sock_posix.a 00:03:12.067 SO libspdk_fsdev_aio.so.1.0 00:03:12.067 SO libspdk_sock_posix.so.6.0 00:03:12.067 SYMLINK libspdk_fsdev_aio.so 00:03:12.327 SYMLINK libspdk_sock_posix.so 00:03:12.327 CC module/bdev/passthru/vbdev_passthru.o 00:03:12.327 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:12.327 CC module/bdev/gpt/gpt.o 00:03:12.327 CC module/bdev/delay/vbdev_delay.o 00:03:12.327 CC module/bdev/error/vbdev_error.o 00:03:12.327 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:12.327 CC module/bdev/gpt/vbdev_gpt.o 00:03:12.327 CC module/bdev/error/vbdev_error_rpc.o 00:03:12.327 CC module/bdev/lvol/vbdev_lvol.o 00:03:12.327 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:12.327 CC module/bdev/aio/bdev_aio_rpc.o 00:03:12.327 CC module/bdev/aio/bdev_aio.o 00:03:12.327 CC module/bdev/malloc/bdev_malloc.o 00:03:12.327 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:12.327 CC module/bdev/iscsi/bdev_iscsi.o 00:03:12.327 CC module/bdev/nvme/bdev_nvme.o 00:03:12.327 CC module/bdev/null/bdev_null.o 00:03:12.327 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:12.327 CC module/bdev/null/bdev_null_rpc.o 00:03:12.327 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:12.327 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:12.327 CC module/bdev/nvme/nvme_rpc.o 00:03:12.327 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:12.327 CC module/bdev/nvme/bdev_mdns_client.o 00:03:12.327 CC module/bdev/nvme/vbdev_opal.o 00:03:12.327 CC module/blobfs/bdev/blobfs_bdev.o 00:03:12.327 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:12.327 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:12.327 CC module/bdev/ftl/bdev_ftl.o 00:03:12.327 CC module/bdev/raid/bdev_raid.o 00:03:12.327 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:12.327 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:12.327 CC module/bdev/raid/bdev_raid_rpc.o 00:03:12.327 CC module/bdev/raid/bdev_raid_sb.o 00:03:12.327 CC module/bdev/raid/raid0.o 00:03:12.327 CC 
module/bdev/raid/raid1.o 00:03:12.327 CC module/bdev/raid/concat.o 00:03:12.327 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:12.327 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:12.327 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:12.327 CC module/bdev/split/vbdev_split.o 00:03:12.327 CC module/bdev/split/vbdev_split_rpc.o 00:03:12.586 LIB libspdk_bdev_error.a 00:03:12.586 LIB libspdk_blobfs_bdev.a 00:03:12.586 LIB libspdk_bdev_gpt.a 00:03:12.586 LIB libspdk_bdev_null.a 00:03:12.586 SO libspdk_blobfs_bdev.so.6.0 00:03:12.844 SO libspdk_bdev_error.so.6.0 00:03:12.844 SO libspdk_bdev_gpt.so.6.0 00:03:12.844 SO libspdk_bdev_null.so.6.0 00:03:12.844 LIB libspdk_bdev_split.a 00:03:12.844 LIB libspdk_bdev_passthru.a 00:03:12.844 LIB libspdk_bdev_aio.a 00:03:12.844 LIB libspdk_bdev_zone_block.a 00:03:12.844 SO libspdk_bdev_split.so.6.0 00:03:12.844 LIB libspdk_bdev_ftl.a 00:03:12.844 SYMLINK libspdk_blobfs_bdev.so 00:03:12.844 LIB libspdk_bdev_malloc.a 00:03:12.844 SYMLINK libspdk_bdev_error.so 00:03:12.844 LIB libspdk_bdev_iscsi.a 00:03:12.844 SO libspdk_bdev_zone_block.so.6.0 00:03:12.844 SYMLINK libspdk_bdev_gpt.so 00:03:12.844 SO libspdk_bdev_passthru.so.6.0 00:03:12.844 SO libspdk_bdev_aio.so.6.0 00:03:12.844 SYMLINK libspdk_bdev_null.so 00:03:12.844 LIB libspdk_bdev_delay.a 00:03:12.844 SO libspdk_bdev_malloc.so.6.0 00:03:12.844 SO libspdk_bdev_ftl.so.6.0 00:03:12.844 SO libspdk_bdev_iscsi.so.6.0 00:03:12.844 SYMLINK libspdk_bdev_split.so 00:03:12.844 SO libspdk_bdev_delay.so.6.0 00:03:12.844 SYMLINK libspdk_bdev_zone_block.so 00:03:12.844 SYMLINK libspdk_bdev_passthru.so 00:03:12.844 SYMLINK libspdk_bdev_aio.so 00:03:12.844 SYMLINK libspdk_bdev_ftl.so 00:03:12.844 SYMLINK libspdk_bdev_iscsi.so 00:03:12.844 SYMLINK libspdk_bdev_malloc.so 00:03:12.844 SYMLINK libspdk_bdev_delay.so 00:03:12.844 LIB libspdk_bdev_lvol.a 00:03:12.844 SO libspdk_bdev_lvol.so.6.0 00:03:12.844 LIB libspdk_bdev_virtio.a 00:03:13.104 SO libspdk_bdev_virtio.so.6.0 00:03:13.104 SYMLINK 
libspdk_bdev_lvol.so 00:03:13.104 SYMLINK libspdk_bdev_virtio.so 00:03:13.362 LIB libspdk_bdev_raid.a 00:03:13.362 SO libspdk_bdev_raid.so.6.0 00:03:13.362 SYMLINK libspdk_bdev_raid.so 00:03:14.301 LIB libspdk_bdev_nvme.a 00:03:14.301 SO libspdk_bdev_nvme.so.7.1 00:03:14.301 SYMLINK libspdk_bdev_nvme.so 00:03:15.238 CC module/event/subsystems/iobuf/iobuf.o 00:03:15.238 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:15.238 CC module/event/subsystems/sock/sock.o 00:03:15.238 CC module/event/subsystems/keyring/keyring.o 00:03:15.238 CC module/event/subsystems/vmd/vmd.o 00:03:15.238 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:15.238 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:15.238 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:15.238 CC module/event/subsystems/fsdev/fsdev.o 00:03:15.238 CC module/event/subsystems/scheduler/scheduler.o 00:03:15.238 LIB libspdk_event_vfu_tgt.a 00:03:15.238 LIB libspdk_event_sock.a 00:03:15.238 LIB libspdk_event_fsdev.a 00:03:15.238 LIB libspdk_event_scheduler.a 00:03:15.238 LIB libspdk_event_keyring.a 00:03:15.238 LIB libspdk_event_iobuf.a 00:03:15.238 LIB libspdk_event_vhost_blk.a 00:03:15.238 LIB libspdk_event_vmd.a 00:03:15.238 SO libspdk_event_scheduler.so.4.0 00:03:15.238 SO libspdk_event_sock.so.5.0 00:03:15.238 SO libspdk_event_vfu_tgt.so.3.0 00:03:15.238 SO libspdk_event_vhost_blk.so.3.0 00:03:15.238 SO libspdk_event_fsdev.so.1.0 00:03:15.238 SO libspdk_event_keyring.so.1.0 00:03:15.238 SO libspdk_event_iobuf.so.3.0 00:03:15.238 SO libspdk_event_vmd.so.6.0 00:03:15.238 SYMLINK libspdk_event_vfu_tgt.so 00:03:15.238 SYMLINK libspdk_event_scheduler.so 00:03:15.238 SYMLINK libspdk_event_vhost_blk.so 00:03:15.238 SYMLINK libspdk_event_sock.so 00:03:15.238 SYMLINK libspdk_event_keyring.so 00:03:15.238 SYMLINK libspdk_event_fsdev.so 00:03:15.238 SYMLINK libspdk_event_vmd.so 00:03:15.238 SYMLINK libspdk_event_iobuf.so 00:03:15.808 CC module/event/subsystems/accel/accel.o 00:03:15.808 LIB libspdk_event_accel.a 
00:03:15.808 SO libspdk_event_accel.so.6.0 00:03:16.068 SYMLINK libspdk_event_accel.so 00:03:16.326 CC module/event/subsystems/bdev/bdev.o 00:03:16.587 LIB libspdk_event_bdev.a 00:03:16.587 SO libspdk_event_bdev.so.6.0 00:03:16.587 SYMLINK libspdk_event_bdev.so 00:03:16.847 CC module/event/subsystems/scsi/scsi.o 00:03:16.847 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:16.847 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:16.847 CC module/event/subsystems/ublk/ublk.o 00:03:16.847 CC module/event/subsystems/nbd/nbd.o 00:03:17.106 LIB libspdk_event_ublk.a 00:03:17.106 LIB libspdk_event_nbd.a 00:03:17.106 LIB libspdk_event_scsi.a 00:03:17.106 SO libspdk_event_ublk.so.3.0 00:03:17.106 SO libspdk_event_nbd.so.6.0 00:03:17.106 SO libspdk_event_scsi.so.6.0 00:03:17.106 LIB libspdk_event_nvmf.a 00:03:17.106 SYMLINK libspdk_event_nbd.so 00:03:17.106 SYMLINK libspdk_event_ublk.so 00:03:17.106 SO libspdk_event_nvmf.so.6.0 00:03:17.106 SYMLINK libspdk_event_scsi.so 00:03:17.366 SYMLINK libspdk_event_nvmf.so 00:03:17.626 CC module/event/subsystems/iscsi/iscsi.o 00:03:17.626 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:17.626 LIB libspdk_event_vhost_scsi.a 00:03:17.626 LIB libspdk_event_iscsi.a 00:03:17.626 SO libspdk_event_vhost_scsi.so.3.0 00:03:17.626 SO libspdk_event_iscsi.so.6.0 00:03:17.886 SYMLINK libspdk_event_vhost_scsi.so 00:03:17.886 SYMLINK libspdk_event_iscsi.so 00:03:17.886 SO libspdk.so.6.0 00:03:17.886 SYMLINK libspdk.so 00:03:18.486 CC app/spdk_nvme_perf/perf.o 00:03:18.486 CXX app/trace/trace.o 00:03:18.486 CC app/spdk_top/spdk_top.o 00:03:18.486 CC app/trace_record/trace_record.o 00:03:18.486 CC app/spdk_nvme_identify/identify.o 00:03:18.486 CC app/spdk_lspci/spdk_lspci.o 00:03:18.486 CC test/rpc_client/rpc_client_test.o 00:03:18.486 CC app/spdk_nvme_discover/discovery_aer.o 00:03:18.486 TEST_HEADER include/spdk/accel.h 00:03:18.486 TEST_HEADER include/spdk/accel_module.h 00:03:18.486 TEST_HEADER include/spdk/assert.h 00:03:18.486 TEST_HEADER 
include/spdk/bdev.h 00:03:18.486 TEST_HEADER include/spdk/barrier.h 00:03:18.486 TEST_HEADER include/spdk/base64.h 00:03:18.486 TEST_HEADER include/spdk/bit_array.h 00:03:18.486 TEST_HEADER include/spdk/bdev_zone.h 00:03:18.486 TEST_HEADER include/spdk/bdev_module.h 00:03:18.486 TEST_HEADER include/spdk/bit_pool.h 00:03:18.486 TEST_HEADER include/spdk/blob_bdev.h 00:03:18.486 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:18.486 TEST_HEADER include/spdk/blob.h 00:03:18.486 TEST_HEADER include/spdk/blobfs.h 00:03:18.486 TEST_HEADER include/spdk/conf.h 00:03:18.486 TEST_HEADER include/spdk/config.h 00:03:18.486 TEST_HEADER include/spdk/cpuset.h 00:03:18.486 TEST_HEADER include/spdk/crc32.h 00:03:18.486 TEST_HEADER include/spdk/crc16.h 00:03:18.486 TEST_HEADER include/spdk/crc64.h 00:03:18.486 TEST_HEADER include/spdk/endian.h 00:03:18.486 TEST_HEADER include/spdk/dma.h 00:03:18.486 CC app/nvmf_tgt/nvmf_main.o 00:03:18.486 TEST_HEADER include/spdk/dif.h 00:03:18.486 TEST_HEADER include/spdk/env_dpdk.h 00:03:18.486 TEST_HEADER include/spdk/env.h 00:03:18.486 TEST_HEADER include/spdk/fd.h 00:03:18.486 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:18.486 TEST_HEADER include/spdk/event.h 00:03:18.486 TEST_HEADER include/spdk/file.h 00:03:18.486 TEST_HEADER include/spdk/fsdev.h 00:03:18.486 TEST_HEADER include/spdk/fd_group.h 00:03:18.486 TEST_HEADER include/spdk/ftl.h 00:03:18.486 TEST_HEADER include/spdk/fsdev_module.h 00:03:18.486 TEST_HEADER include/spdk/gpt_spec.h 00:03:18.486 TEST_HEADER include/spdk/idxd_spec.h 00:03:18.486 TEST_HEADER include/spdk/histogram_data.h 00:03:18.486 TEST_HEADER include/spdk/hexlify.h 00:03:18.486 TEST_HEADER include/spdk/idxd.h 00:03:18.486 TEST_HEADER include/spdk/ioat.h 00:03:18.486 CC app/spdk_dd/spdk_dd.o 00:03:18.486 TEST_HEADER include/spdk/init.h 00:03:18.486 TEST_HEADER include/spdk/iscsi_spec.h 00:03:18.486 TEST_HEADER include/spdk/json.h 00:03:18.486 TEST_HEADER include/spdk/ioat_spec.h 00:03:18.486 TEST_HEADER 
include/spdk/jsonrpc.h 00:03:18.486 TEST_HEADER include/spdk/keyring.h 00:03:18.486 TEST_HEADER include/spdk/keyring_module.h 00:03:18.486 TEST_HEADER include/spdk/likely.h 00:03:18.486 TEST_HEADER include/spdk/log.h 00:03:18.486 TEST_HEADER include/spdk/lvol.h 00:03:18.486 TEST_HEADER include/spdk/md5.h 00:03:18.486 TEST_HEADER include/spdk/memory.h 00:03:18.486 TEST_HEADER include/spdk/mmio.h 00:03:18.486 TEST_HEADER include/spdk/nbd.h 00:03:18.486 TEST_HEADER include/spdk/net.h 00:03:18.486 TEST_HEADER include/spdk/notify.h 00:03:18.486 TEST_HEADER include/spdk/nvme.h 00:03:18.486 TEST_HEADER include/spdk/nvme_intel.h 00:03:18.486 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:18.486 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:18.486 CC app/iscsi_tgt/iscsi_tgt.o 00:03:18.486 TEST_HEADER include/spdk/nvme_spec.h 00:03:18.486 TEST_HEADER include/spdk/nvme_zns.h 00:03:18.486 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:18.486 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:18.486 TEST_HEADER include/spdk/nvmf_spec.h 00:03:18.486 TEST_HEADER include/spdk/nvmf_transport.h 00:03:18.486 TEST_HEADER include/spdk/nvmf.h 00:03:18.486 TEST_HEADER include/spdk/opal_spec.h 00:03:18.486 TEST_HEADER include/spdk/opal.h 00:03:18.486 TEST_HEADER include/spdk/pci_ids.h 00:03:18.486 TEST_HEADER include/spdk/pipe.h 00:03:18.486 CC app/spdk_tgt/spdk_tgt.o 00:03:18.486 TEST_HEADER include/spdk/queue.h 00:03:18.486 TEST_HEADER include/spdk/rpc.h 00:03:18.486 TEST_HEADER include/spdk/reduce.h 00:03:18.486 TEST_HEADER include/spdk/scheduler.h 00:03:18.486 TEST_HEADER include/spdk/scsi_spec.h 00:03:18.486 TEST_HEADER include/spdk/scsi.h 00:03:18.486 TEST_HEADER include/spdk/sock.h 00:03:18.486 TEST_HEADER include/spdk/stdinc.h 00:03:18.486 TEST_HEADER include/spdk/string.h 00:03:18.486 TEST_HEADER include/spdk/thread.h 00:03:18.486 TEST_HEADER include/spdk/trace.h 00:03:18.486 TEST_HEADER include/spdk/trace_parser.h 00:03:18.486 TEST_HEADER include/spdk/tree.h 00:03:18.486 TEST_HEADER 
include/spdk/ublk.h 00:03:18.486 TEST_HEADER include/spdk/util.h 00:03:18.486 TEST_HEADER include/spdk/uuid.h 00:03:18.486 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:18.486 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:18.486 TEST_HEADER include/spdk/vhost.h 00:03:18.486 TEST_HEADER include/spdk/version.h 00:03:18.486 TEST_HEADER include/spdk/zipf.h 00:03:18.486 TEST_HEADER include/spdk/vmd.h 00:03:18.486 TEST_HEADER include/spdk/xor.h 00:03:18.486 CXX test/cpp_headers/accel.o 00:03:18.486 CXX test/cpp_headers/accel_module.o 00:03:18.486 CXX test/cpp_headers/assert.o 00:03:18.486 CXX test/cpp_headers/barrier.o 00:03:18.486 CXX test/cpp_headers/bdev_module.o 00:03:18.486 CXX test/cpp_headers/base64.o 00:03:18.486 CXX test/cpp_headers/bdev.o 00:03:18.486 CXX test/cpp_headers/bit_array.o 00:03:18.486 CXX test/cpp_headers/bdev_zone.o 00:03:18.486 CXX test/cpp_headers/bit_pool.o 00:03:18.486 CXX test/cpp_headers/blobfs_bdev.o 00:03:18.486 CXX test/cpp_headers/blob_bdev.o 00:03:18.486 CXX test/cpp_headers/blobfs.o 00:03:18.486 CXX test/cpp_headers/conf.o 00:03:18.486 CXX test/cpp_headers/cpuset.o 00:03:18.486 CXX test/cpp_headers/blob.o 00:03:18.486 CXX test/cpp_headers/crc16.o 00:03:18.486 CXX test/cpp_headers/config.o 00:03:18.486 CXX test/cpp_headers/crc32.o 00:03:18.486 CXX test/cpp_headers/crc64.o 00:03:18.486 CXX test/cpp_headers/dma.o 00:03:18.486 CXX test/cpp_headers/dif.o 00:03:18.486 CXX test/cpp_headers/endian.o 00:03:18.486 CXX test/cpp_headers/env_dpdk.o 00:03:18.486 CXX test/cpp_headers/fd_group.o 00:03:18.486 CXX test/cpp_headers/fd.o 00:03:18.486 CXX test/cpp_headers/env.o 00:03:18.486 CXX test/cpp_headers/event.o 00:03:18.486 CXX test/cpp_headers/fsdev.o 00:03:18.486 CXX test/cpp_headers/file.o 00:03:18.486 CXX test/cpp_headers/ftl.o 00:03:18.486 CXX test/cpp_headers/fsdev_module.o 00:03:18.486 CXX test/cpp_headers/hexlify.o 00:03:18.486 CXX test/cpp_headers/histogram_data.o 00:03:18.486 CXX test/cpp_headers/gpt_spec.o 00:03:18.486 CXX 
test/cpp_headers/idxd_spec.o 00:03:18.486 CXX test/cpp_headers/idxd.o 00:03:18.486 CXX test/cpp_headers/ioat.o 00:03:18.486 CXX test/cpp_headers/init.o 00:03:18.486 CXX test/cpp_headers/ioat_spec.o 00:03:18.486 CXX test/cpp_headers/jsonrpc.o 00:03:18.486 CXX test/cpp_headers/iscsi_spec.o 00:03:18.486 CXX test/cpp_headers/keyring_module.o 00:03:18.486 CXX test/cpp_headers/json.o 00:03:18.486 CXX test/cpp_headers/keyring.o 00:03:18.486 CXX test/cpp_headers/likely.o 00:03:18.486 CXX test/cpp_headers/log.o 00:03:18.486 CXX test/cpp_headers/md5.o 00:03:18.486 CXX test/cpp_headers/lvol.o 00:03:18.486 CXX test/cpp_headers/mmio.o 00:03:18.486 CXX test/cpp_headers/nbd.o 00:03:18.486 CXX test/cpp_headers/memory.o 00:03:18.486 CXX test/cpp_headers/net.o 00:03:18.486 CXX test/cpp_headers/notify.o 00:03:18.486 CXX test/cpp_headers/nvme.o 00:03:18.486 CXX test/cpp_headers/nvme_intel.o 00:03:18.486 CXX test/cpp_headers/nvme_ocssd.o 00:03:18.486 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:18.486 CC examples/ioat/perf/perf.o 00:03:18.487 CXX test/cpp_headers/nvme_spec.o 00:03:18.487 CXX test/cpp_headers/nvme_zns.o 00:03:18.487 CXX test/cpp_headers/nvmf_cmd.o 00:03:18.487 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:18.487 CXX test/cpp_headers/nvmf.o 00:03:18.487 CXX test/cpp_headers/nvmf_transport.o 00:03:18.487 CXX test/cpp_headers/nvmf_spec.o 00:03:18.487 CXX test/cpp_headers/opal.o 00:03:18.487 CC examples/util/zipf/zipf.o 00:03:18.487 CXX test/cpp_headers/opal_spec.o 00:03:18.487 CC test/thread/poller_perf/poller_perf.o 00:03:18.487 CC examples/ioat/verify/verify.o 00:03:18.768 CC app/fio/nvme/fio_plugin.o 00:03:18.768 CC test/env/vtophys/vtophys.o 00:03:18.768 CXX test/cpp_headers/pci_ids.o 00:03:18.768 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:18.768 CC test/env/memory/memory_ut.o 00:03:18.768 CC test/env/pci/pci_ut.o 00:03:18.768 CC test/app/jsoncat/jsoncat.o 00:03:18.768 CC test/dma/test_dma/test_dma.o 00:03:18.768 CC test/app/stub/stub.o 00:03:18.768 CC 
test/app/histogram_perf/histogram_perf.o 00:03:18.768 CC app/fio/bdev/fio_plugin.o 00:03:18.768 CC test/app/bdev_svc/bdev_svc.o 00:03:18.768 LINK spdk_lspci 00:03:18.768 LINK rpc_client_test 00:03:19.042 LINK nvmf_tgt 00:03:19.042 CC test/env/mem_callbacks/mem_callbacks.o 00:03:19.042 LINK spdk_tgt 00:03:19.042 LINK zipf 00:03:19.042 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:19.042 LINK spdk_nvme_discover 00:03:19.042 LINK interrupt_tgt 00:03:19.042 LINK poller_perf 00:03:19.042 LINK vtophys 00:03:19.042 CXX test/cpp_headers/pipe.o 00:03:19.042 CXX test/cpp_headers/queue.o 00:03:19.042 CXX test/cpp_headers/reduce.o 00:03:19.042 CXX test/cpp_headers/rpc.o 00:03:19.303 CXX test/cpp_headers/scheduler.o 00:03:19.303 CXX test/cpp_headers/scsi.o 00:03:19.303 CXX test/cpp_headers/scsi_spec.o 00:03:19.303 CXX test/cpp_headers/sock.o 00:03:19.303 CXX test/cpp_headers/stdinc.o 00:03:19.303 CXX test/cpp_headers/string.o 00:03:19.303 CXX test/cpp_headers/thread.o 00:03:19.303 CXX test/cpp_headers/trace.o 00:03:19.303 CXX test/cpp_headers/trace_parser.o 00:03:19.303 LINK histogram_perf 00:03:19.303 LINK ioat_perf 00:03:19.303 CXX test/cpp_headers/tree.o 00:03:19.303 CXX test/cpp_headers/ublk.o 00:03:19.303 CXX test/cpp_headers/util.o 00:03:19.303 CXX test/cpp_headers/uuid.o 00:03:19.303 CXX test/cpp_headers/version.o 00:03:19.303 CXX test/cpp_headers/vfio_user_pci.o 00:03:19.303 CXX test/cpp_headers/vfio_user_spec.o 00:03:19.303 CXX test/cpp_headers/vhost.o 00:03:19.303 CXX test/cpp_headers/vmd.o 00:03:19.303 CXX test/cpp_headers/xor.o 00:03:19.303 LINK spdk_trace_record 00:03:19.303 CXX test/cpp_headers/zipf.o 00:03:19.303 LINK iscsi_tgt 00:03:19.303 LINK jsoncat 00:03:19.303 LINK env_dpdk_post_init 00:03:19.303 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:19.303 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:19.303 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:19.303 LINK bdev_svc 00:03:19.303 LINK stub 00:03:19.303 LINK verify 00:03:19.562 LINK mem_callbacks 
00:03:19.562 LINK spdk_dd 00:03:19.562 LINK spdk_trace 00:03:19.562 LINK pci_ut 00:03:19.562 CC examples/sock/hello_world/hello_sock.o 00:03:19.562 CC test/event/reactor/reactor.o 00:03:19.562 CC examples/vmd/lsvmd/lsvmd.o 00:03:19.820 LINK spdk_nvme_identify 00:03:19.820 LINK test_dma 00:03:19.820 CC test/event/reactor_perf/reactor_perf.o 00:03:19.820 CC test/event/event_perf/event_perf.o 00:03:19.820 CC examples/idxd/perf/perf.o 00:03:19.820 CC examples/vmd/led/led.o 00:03:19.820 CC test/event/app_repeat/app_repeat.o 00:03:19.820 CC examples/thread/thread/thread_ex.o 00:03:19.820 CC test/event/scheduler/scheduler.o 00:03:19.820 LINK spdk_nvme_perf 00:03:19.820 LINK nvme_fuzz 00:03:19.820 LINK reactor 00:03:19.820 LINK spdk_bdev 00:03:19.820 LINK spdk_nvme 00:03:19.820 LINK lsvmd 00:03:19.820 LINK vhost_fuzz 00:03:19.820 LINK led 00:03:19.820 LINK reactor_perf 00:03:19.820 LINK event_perf 00:03:19.820 LINK app_repeat 00:03:19.820 LINK hello_sock 00:03:19.820 LINK memory_ut 00:03:19.820 LINK spdk_top 00:03:20.079 LINK scheduler 00:03:20.079 CC app/vhost/vhost.o 00:03:20.079 LINK thread 00:03:20.079 LINK idxd_perf 00:03:20.079 CC test/nvme/simple_copy/simple_copy.o 00:03:20.079 CC test/nvme/e2edp/nvme_dp.o 00:03:20.338 CC test/nvme/overhead/overhead.o 00:03:20.338 CC test/nvme/reset/reset.o 00:03:20.338 CC test/nvme/startup/startup.o 00:03:20.338 CC test/nvme/sgl/sgl.o 00:03:20.338 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:20.338 CC test/nvme/connect_stress/connect_stress.o 00:03:20.338 CC test/nvme/fdp/fdp.o 00:03:20.338 CC test/nvme/fused_ordering/fused_ordering.o 00:03:20.338 CC test/nvme/err_injection/err_injection.o 00:03:20.338 CC test/nvme/aer/aer.o 00:03:20.338 CC test/nvme/compliance/nvme_compliance.o 00:03:20.338 CC test/nvme/boot_partition/boot_partition.o 00:03:20.338 CC test/nvme/cuse/cuse.o 00:03:20.338 CC test/nvme/reserve/reserve.o 00:03:20.338 LINK vhost 00:03:20.338 CC test/accel/dif/dif.o 00:03:20.338 CC test/blobfs/mkfs/mkfs.o 00:03:20.338 
CC examples/nvme/reconnect/reconnect.o 00:03:20.338 CC examples/nvme/arbitration/arbitration.o 00:03:20.338 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:20.338 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:20.338 CC examples/nvme/hotplug/hotplug.o 00:03:20.338 CC examples/nvme/abort/abort.o 00:03:20.338 CC test/lvol/esnap/esnap.o 00:03:20.338 CC examples/nvme/hello_world/hello_world.o 00:03:20.338 LINK boot_partition 00:03:20.338 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:20.338 LINK startup 00:03:20.338 LINK err_injection 00:03:20.598 LINK doorbell_aers 00:03:20.598 LINK connect_stress 00:03:20.598 LINK reserve 00:03:20.598 LINK simple_copy 00:03:20.598 LINK reset 00:03:20.598 LINK fused_ordering 00:03:20.598 LINK nvme_dp 00:03:20.598 LINK sgl 00:03:20.598 CC examples/accel/perf/accel_perf.o 00:03:20.598 LINK mkfs 00:03:20.598 LINK overhead 00:03:20.598 LINK aer 00:03:20.598 LINK fdp 00:03:20.598 CC examples/blob/hello_world/hello_blob.o 00:03:20.598 CC examples/blob/cli/blobcli.o 00:03:20.598 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:20.598 LINK nvme_compliance 00:03:20.598 LINK cmb_copy 00:03:20.598 LINK pmr_persistence 00:03:20.598 LINK hello_world 00:03:20.598 LINK hotplug 00:03:20.858 LINK reconnect 00:03:20.858 LINK arbitration 00:03:20.858 LINK abort 00:03:20.858 LINK hello_blob 00:03:20.858 LINK nvme_manage 00:03:20.858 LINK hello_fsdev 00:03:20.858 LINK iscsi_fuzz 00:03:20.858 LINK dif 00:03:20.858 LINK accel_perf 00:03:21.117 LINK blobcli 00:03:21.377 LINK cuse 00:03:21.377 CC examples/bdev/hello_world/hello_bdev.o 00:03:21.377 CC examples/bdev/bdevperf/bdevperf.o 00:03:21.377 CC test/bdev/bdevio/bdevio.o 00:03:21.636 LINK hello_bdev 00:03:21.896 LINK bdevio 00:03:22.156 LINK bdevperf 00:03:22.726 CC examples/nvmf/nvmf/nvmf.o 00:03:22.726 LINK nvmf 00:03:24.107 LINK esnap 00:03:24.372 00:03:24.372 real 0m54.859s 00:03:24.372 user 6m50.144s 00:03:24.372 sys 3m4.633s 00:03:24.372 22:12:45 make -- common/autotest_common.sh@1130 -- $ 
xtrace_disable 00:03:24.372 22:12:45 make -- common/autotest_common.sh@10 -- $ set +x 00:03:24.372 ************************************ 00:03:24.372 END TEST make 00:03:24.372 ************************************ 00:03:24.372 22:12:45 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:24.372 22:12:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:24.372 22:12:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:24.372 22:12:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.372 22:12:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:24.372 22:12:45 -- pm/common@44 -- $ pid=7592 00:03:24.372 22:12:45 -- pm/common@50 -- $ kill -TERM 7592 00:03:24.372 22:12:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.372 22:12:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:24.372 22:12:45 -- pm/common@44 -- $ pid=7594 00:03:24.372 22:12:45 -- pm/common@50 -- $ kill -TERM 7594 00:03:24.372 22:12:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.372 22:12:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:24.372 22:12:45 -- pm/common@44 -- $ pid=7595 00:03:24.372 22:12:45 -- pm/common@50 -- $ kill -TERM 7595 00:03:24.372 22:12:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.372 22:12:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:24.372 22:12:45 -- pm/common@44 -- $ pid=7624 00:03:24.372 22:12:45 -- pm/common@50 -- $ sudo -E kill -TERM 7624 00:03:24.372 22:12:45 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:24.372 22:12:45 -- spdk/autorun.sh@27 -- $ sudo -E 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:24.372 22:12:45 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:24.372 22:12:45 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:24.372 22:12:45 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:24.637 22:12:45 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:24.637 22:12:45 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:24.637 22:12:45 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:24.637 22:12:45 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:24.637 22:12:45 -- scripts/common.sh@336 -- # IFS=.-: 00:03:24.637 22:12:45 -- scripts/common.sh@336 -- # read -ra ver1 00:03:24.637 22:12:45 -- scripts/common.sh@337 -- # IFS=.-: 00:03:24.637 22:12:45 -- scripts/common.sh@337 -- # read -ra ver2 00:03:24.637 22:12:45 -- scripts/common.sh@338 -- # local 'op=<' 00:03:24.637 22:12:45 -- scripts/common.sh@340 -- # ver1_l=2 00:03:24.637 22:12:45 -- scripts/common.sh@341 -- # ver2_l=1 00:03:24.637 22:12:45 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:24.637 22:12:45 -- scripts/common.sh@344 -- # case "$op" in 00:03:24.637 22:12:45 -- scripts/common.sh@345 -- # : 1 00:03:24.637 22:12:45 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:24.637 22:12:45 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:24.637 22:12:45 -- scripts/common.sh@365 -- # decimal 1 00:03:24.637 22:12:45 -- scripts/common.sh@353 -- # local d=1 00:03:24.637 22:12:45 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:24.637 22:12:45 -- scripts/common.sh@355 -- # echo 1 00:03:24.637 22:12:45 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:24.637 22:12:45 -- scripts/common.sh@366 -- # decimal 2 00:03:24.637 22:12:45 -- scripts/common.sh@353 -- # local d=2 00:03:24.637 22:12:45 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:24.637 22:12:45 -- scripts/common.sh@355 -- # echo 2 00:03:24.637 22:12:45 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:24.637 22:12:45 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:24.637 22:12:45 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:24.637 22:12:45 -- scripts/common.sh@368 -- # return 0 00:03:24.637 22:12:45 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:24.637 22:12:45 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:24.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.637 --rc genhtml_branch_coverage=1 00:03:24.637 --rc genhtml_function_coverage=1 00:03:24.637 --rc genhtml_legend=1 00:03:24.637 --rc geninfo_all_blocks=1 00:03:24.637 --rc geninfo_unexecuted_blocks=1 00:03:24.637 00:03:24.637 ' 00:03:24.637 22:12:45 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:24.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.637 --rc genhtml_branch_coverage=1 00:03:24.637 --rc genhtml_function_coverage=1 00:03:24.637 --rc genhtml_legend=1 00:03:24.637 --rc geninfo_all_blocks=1 00:03:24.637 --rc geninfo_unexecuted_blocks=1 00:03:24.637 00:03:24.637 ' 00:03:24.637 22:12:45 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:24.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.637 --rc genhtml_branch_coverage=1 00:03:24.637 --rc 
genhtml_function_coverage=1 00:03:24.637 --rc genhtml_legend=1 00:03:24.637 --rc geninfo_all_blocks=1 00:03:24.637 --rc geninfo_unexecuted_blocks=1 00:03:24.637 00:03:24.637 ' 00:03:24.637 22:12:45 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:24.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.637 --rc genhtml_branch_coverage=1 00:03:24.637 --rc genhtml_function_coverage=1 00:03:24.637 --rc genhtml_legend=1 00:03:24.637 --rc geninfo_all_blocks=1 00:03:24.637 --rc geninfo_unexecuted_blocks=1 00:03:24.637 00:03:24.637 ' 00:03:24.637 22:12:45 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:24.637 22:12:45 -- nvmf/common.sh@7 -- # uname -s 00:03:24.638 22:12:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:24.638 22:12:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:24.638 22:12:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:24.638 22:12:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:24.638 22:12:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:24.638 22:12:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:24.638 22:12:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:24.638 22:12:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:24.638 22:12:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:24.638 22:12:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:24.638 22:12:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:03:24.638 22:12:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:03:24.638 22:12:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:24.638 22:12:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:24.638 22:12:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:24.638 22:12:45 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:24.638 22:12:45 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:24.638 22:12:45 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:24.638 22:12:45 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:24.638 22:12:45 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:24.638 22:12:45 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:24.638 22:12:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.638 22:12:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.638 22:12:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.638 22:12:45 -- paths/export.sh@5 -- # export PATH 00:03:24.638 22:12:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.638 22:12:45 -- nvmf/common.sh@51 -- # : 0 00:03:24.638 22:12:45 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:24.638 22:12:45 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:24.638 22:12:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:24.638 22:12:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:24.638 22:12:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:24.638 22:12:45 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:24.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:24.638 22:12:45 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:24.638 22:12:45 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:24.638 22:12:45 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:24.638 22:12:45 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:24.638 22:12:45 -- spdk/autotest.sh@32 -- # uname -s 00:03:24.638 22:12:45 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:24.638 22:12:45 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:24.638 22:12:45 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:24.638 22:12:45 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:24.638 22:12:45 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:24.638 22:12:45 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:24.638 22:12:45 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:24.638 22:12:45 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:24.638 22:12:45 -- spdk/autotest.sh@48 -- # udevadm_pid=88053 00:03:24.638 22:12:45 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:24.638 22:12:45 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:24.638 22:12:45 -- pm/common@17 -- # local monitor 00:03:24.638 22:12:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.638 22:12:45 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:24.638 22:12:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.638 22:12:45 -- pm/common@21 -- # date +%s 00:03:24.638 22:12:45 -- pm/common@21 -- # date +%s 00:03:24.638 22:12:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.638 22:12:45 -- pm/common@21 -- # date +%s 00:03:24.638 22:12:45 -- pm/common@25 -- # sleep 1 00:03:24.638 22:12:45 -- pm/common@21 -- # date +%s 00:03:24.638 22:12:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734210765 00:03:24.638 22:12:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734210765 00:03:24.638 22:12:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734210765 00:03:24.638 22:12:45 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734210765 00:03:24.638 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734210765_collect-cpu-load.pm.log 00:03:24.638 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734210765_collect-vmstat.pm.log 00:03:24.638 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734210765_collect-cpu-temp.pm.log 00:03:24.638 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734210765_collect-bmc-pm.bmc.pm.log 00:03:25.579 
22:12:46 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:25.579 22:12:46 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:25.579 22:12:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:25.579 22:12:46 -- common/autotest_common.sh@10 -- # set +x 00:03:25.579 22:12:46 -- spdk/autotest.sh@59 -- # create_test_list 00:03:25.579 22:12:46 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:25.579 22:12:46 -- common/autotest_common.sh@10 -- # set +x 00:03:25.839 22:12:46 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:25.839 22:12:46 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:25.839 22:12:46 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:25.839 22:12:46 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:25.839 22:12:46 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:25.839 22:12:46 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:25.839 22:12:46 -- common/autotest_common.sh@1457 -- # uname 00:03:25.839 22:12:46 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:25.839 22:12:46 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:25.839 22:12:46 -- common/autotest_common.sh@1477 -- # uname 00:03:25.839 22:12:46 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:25.839 22:12:46 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:25.839 22:12:46 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:25.839 lcov: LCOV version 1.15 00:03:25.839 22:12:46 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:43.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:43.944 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:50.635 22:13:11 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:50.635 22:13:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.635 22:13:11 -- common/autotest_common.sh@10 -- # set +x 00:03:50.635 22:13:11 -- spdk/autotest.sh@78 -- # rm -f 00:03:50.635 22:13:11 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.292 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:53.292 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:53.292 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:53.292 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:53.292 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:53.292 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:53.292 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:53.292 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:53.292 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:53.292 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:53.292 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:53.292 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:53.552 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:53.552 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:53.552 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:53.552 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:53.552 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:53.552 22:13:14 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:53.552 22:13:14 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:53.552 22:13:14 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:53.552 22:13:14 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:53.552 22:13:14 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:53.552 22:13:14 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:53.552 22:13:14 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:53.552 22:13:14 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:03:53.552 22:13:14 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:53.552 22:13:14 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:53.552 22:13:14 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:53.552 22:13:14 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.552 22:13:14 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:53.552 22:13:14 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:53.552 22:13:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.552 22:13:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.552 22:13:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:53.552 22:13:14 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:53.552 22:13:14 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:53.552 No valid GPT data, bailing 00:03:53.552 22:13:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:53.552 22:13:14 -- scripts/common.sh@394 -- # pt= 00:03:53.552 22:13:14 -- scripts/common.sh@395 -- 
# return 1 00:03:53.553 22:13:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:53.553 1+0 records in 00:03:53.553 1+0 records out 00:03:53.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00152169 s, 689 MB/s 00:03:53.553 22:13:14 -- spdk/autotest.sh@105 -- # sync 00:03:53.553 22:13:14 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:53.553 22:13:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:53.553 22:13:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:00.134 22:13:19 -- spdk/autotest.sh@111 -- # uname -s 00:04:00.134 22:13:19 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:00.134 22:13:19 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:00.134 22:13:19 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:02.047 Hugepages 00:04:02.047 node hugesize free / total 00:04:02.047 node0 1048576kB 0 / 0 00:04:02.047 node0 2048kB 0 / 0 00:04:02.047 node1 1048576kB 0 / 0 00:04:02.047 node1 2048kB 0 / 0 00:04:02.047 00:04:02.047 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:02.047 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:02.047 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:02.047 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:02.047 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:02.047 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:02.047 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:02.047 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:02.047 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:02.047 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:02.047 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:02.047 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:02.047 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:02.047 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:02.047 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:02.047 I/OAT 0000:80:04.5 8086 
2021 1 ioatdma - - 00:04:02.047 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:02.047 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:02.047 22:13:22 -- spdk/autotest.sh@117 -- # uname -s 00:04:02.047 22:13:22 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:02.047 22:13:22 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:02.047 22:13:22 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:04.590 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:04.849 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:04.849 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:04.849 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:04.849 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:04.849 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:04.849 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:04.849 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:04.849 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:04.849 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:04.849 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:04.849 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:04.849 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:04.850 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:04.850 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:04.850 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:05.788 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:05.788 22:13:26 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:06.728 22:13:27 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:06.728 22:13:27 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:06.728 22:13:27 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:06.728 22:13:27 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:06.728 22:13:27 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:06.728 22:13:27 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:06.728 22:13:27 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.728 22:13:27 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:06.728 22:13:27 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:06.988 22:13:27 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:06.988 22:13:27 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:06.988 22:13:27 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:09.531 Waiting for block devices as requested 00:04:09.531 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:09.791 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:09.791 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:09.791 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:10.052 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:10.052 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:10.052 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:10.313 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:10.313 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:10.313 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:10.573 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:10.573 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:10.573 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:10.573 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:10.834 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:10.834 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:10.834 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:11.095 22:13:31 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:11.095 22:13:31 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:11.095 22:13:31 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:11.095 22:13:31 -- 
common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:11.095 22:13:31 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:11.095 22:13:31 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:11.095 22:13:31 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:11.095 22:13:31 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:11.095 22:13:31 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:11.096 22:13:31 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:11.096 22:13:31 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:11.096 22:13:31 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:11.096 22:13:31 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:11.096 22:13:31 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:11.096 22:13:31 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:11.096 22:13:31 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:11.096 22:13:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:11.096 22:13:31 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:11.096 22:13:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:11.096 22:13:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:11.096 22:13:31 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:11.096 22:13:31 -- common/autotest_common.sh@1543 -- # continue 00:04:11.096 22:13:31 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:11.096 22:13:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:11.096 22:13:31 -- common/autotest_common.sh@10 -- # set +x 00:04:11.096 22:13:31 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:11.096 22:13:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:11.096 
22:13:31 -- common/autotest_common.sh@10 -- # set +x 00:04:11.096 22:13:31 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:14.395 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:14.396 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:14.396 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:14.396 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:14.396 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:14.396 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:14.396 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:14.396 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:14.396 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:14.396 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:14.396 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:14.396 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:14.396 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:14.396 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:14.396 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:14.396 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:14.981 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:14.981 22:13:35 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:14.981 22:13:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:14.981 22:13:35 -- common/autotest_common.sh@10 -- # set +x 00:04:14.981 22:13:35 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:14.981 22:13:35 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:14.981 22:13:35 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:14.981 22:13:35 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:14.981 22:13:35 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:14.981 22:13:35 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:14.981 22:13:35 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:14.981 22:13:35 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:04:14.981 22:13:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:14.981 22:13:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:14.981 22:13:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:14.981 22:13:35 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:14.981 22:13:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:15.240 22:13:35 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:15.240 22:13:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:15.240 22:13:35 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:15.240 22:13:35 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:15.240 22:13:35 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:15.240 22:13:35 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:15.240 22:13:35 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:15.240 22:13:35 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:15.240 22:13:35 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:15.240 22:13:35 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:15.240 22:13:35 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=102016 00:04:15.240 22:13:35 -- common/autotest_common.sh@1585 -- # waitforlisten 102016 00:04:15.240 22:13:35 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.240 22:13:35 -- common/autotest_common.sh@835 -- # '[' -z 102016 ']' 00:04:15.240 22:13:35 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.240 22:13:35 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.241 22:13:35 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.241 22:13:35 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.241 22:13:35 -- common/autotest_common.sh@10 -- # set +x 00:04:15.241 [2024-12-14 22:13:35.968376] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:04:15.241 [2024-12-14 22:13:35.968422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102016 ] 00:04:15.241 [2024-12-14 22:13:36.044185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.241 [2024-12-14 22:13:36.066769] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.500 22:13:36 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.500 22:13:36 -- common/autotest_common.sh@868 -- # return 0 00:04:15.500 22:13:36 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:15.500 22:13:36 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:15.500 22:13:36 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:18.798 nvme0n1 00:04:18.798 22:13:39 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:18.798 [2024-12-14 22:13:39.447363] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:18.798 [2024-12-14 22:13:39.447389] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:18.798 request: 00:04:18.798 { 00:04:18.798 "nvme_ctrlr_name": "nvme0", 00:04:18.798 "password": "test", 00:04:18.798 "method": 
"bdev_nvme_opal_revert", 00:04:18.798 "req_id": 1 00:04:18.798 } 00:04:18.798 Got JSON-RPC error response 00:04:18.798 response: 00:04:18.798 { 00:04:18.798 "code": -32603, 00:04:18.798 "message": "Internal error" 00:04:18.798 } 00:04:18.798 22:13:39 -- common/autotest_common.sh@1591 -- # true 00:04:18.798 22:13:39 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:18.798 22:13:39 -- common/autotest_common.sh@1595 -- # killprocess 102016 00:04:18.798 22:13:39 -- common/autotest_common.sh@954 -- # '[' -z 102016 ']' 00:04:18.798 22:13:39 -- common/autotest_common.sh@958 -- # kill -0 102016 00:04:18.798 22:13:39 -- common/autotest_common.sh@959 -- # uname 00:04:18.798 22:13:39 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.798 22:13:39 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102016 00:04:18.798 22:13:39 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.798 22:13:39 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.798 22:13:39 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102016' 00:04:18.798 killing process with pid 102016 00:04:18.798 22:13:39 -- common/autotest_common.sh@973 -- # kill 102016 00:04:18.798 22:13:39 -- common/autotest_common.sh@978 -- # wait 102016 00:04:18.798 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152
00:04:20.180 22:13:41 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:20.180 22:13:41 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:20.180 22:13:41 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:20.180 22:13:41 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:20.180 22:13:41 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:20.180 22:13:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.180 22:13:41 -- common/autotest_common.sh@10 -- # set +x 00:04:20.180 22:13:41 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:20.180 22:13:41 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:20.180 22:13:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.180 22:13:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.180 22:13:41 -- common/autotest_common.sh@10 -- # set +x 00:04:20.440 ************************************ 00:04:20.440 START TEST env 00:04:20.440
************************************ 00:04:20.440 22:13:41 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:20.440 * Looking for test storage... 00:04:20.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:20.440 22:13:41 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:20.440 22:13:41 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:20.440 22:13:41 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:20.440 22:13:41 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:20.440 22:13:41 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.440 22:13:41 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.440 22:13:41 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.440 22:13:41 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.441 22:13:41 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.441 22:13:41 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.441 22:13:41 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.441 22:13:41 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.441 22:13:41 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.441 22:13:41 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.441 22:13:41 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.441 22:13:41 env -- scripts/common.sh@344 -- # case "$op" in 00:04:20.441 22:13:41 env -- scripts/common.sh@345 -- # : 1 00:04:20.441 22:13:41 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.441 22:13:41 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.441 22:13:41 env -- scripts/common.sh@365 -- # decimal 1 00:04:20.441 22:13:41 env -- scripts/common.sh@353 -- # local d=1 00:04:20.441 22:13:41 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.441 22:13:41 env -- scripts/common.sh@355 -- # echo 1 00:04:20.441 22:13:41 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.441 22:13:41 env -- scripts/common.sh@366 -- # decimal 2 00:04:20.441 22:13:41 env -- scripts/common.sh@353 -- # local d=2 00:04:20.441 22:13:41 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.441 22:13:41 env -- scripts/common.sh@355 -- # echo 2 00:04:20.441 22:13:41 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.441 22:13:41 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.441 22:13:41 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.441 22:13:41 env -- scripts/common.sh@368 -- # return 0 00:04:20.441 22:13:41 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.441 22:13:41 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:20.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.441 --rc genhtml_branch_coverage=1 00:04:20.441 --rc genhtml_function_coverage=1 00:04:20.441 --rc genhtml_legend=1 00:04:20.441 --rc geninfo_all_blocks=1 00:04:20.441 --rc geninfo_unexecuted_blocks=1 00:04:20.441 00:04:20.441 ' 00:04:20.441 22:13:41 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:20.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.441 --rc genhtml_branch_coverage=1 00:04:20.441 --rc genhtml_function_coverage=1 00:04:20.441 --rc genhtml_legend=1 00:04:20.441 --rc geninfo_all_blocks=1 00:04:20.441 --rc geninfo_unexecuted_blocks=1 00:04:20.441 00:04:20.441 ' 00:04:20.441 22:13:41 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:20.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:20.441 --rc genhtml_branch_coverage=1 00:04:20.441 --rc genhtml_function_coverage=1 00:04:20.441 --rc genhtml_legend=1 00:04:20.441 --rc geninfo_all_blocks=1 00:04:20.441 --rc geninfo_unexecuted_blocks=1 00:04:20.441 00:04:20.441 ' 00:04:20.441 22:13:41 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:20.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.441 --rc genhtml_branch_coverage=1 00:04:20.441 --rc genhtml_function_coverage=1 00:04:20.441 --rc genhtml_legend=1 00:04:20.441 --rc geninfo_all_blocks=1 00:04:20.441 --rc geninfo_unexecuted_blocks=1 00:04:20.441 00:04:20.441 ' 00:04:20.441 22:13:41 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:20.441 22:13:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.441 22:13:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.441 22:13:41 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.441 ************************************ 00:04:20.441 START TEST env_memory 00:04:20.441 ************************************ 00:04:20.441 22:13:41 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:20.441 00:04:20.441 00:04:20.441 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.441 http://cunit.sourceforge.net/ 00:04:20.441 00:04:20.441 00:04:20.441 Suite: memory 00:04:20.702 Test: alloc and free memory map ...[2024-12-14 22:13:41.336844] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:20.702 passed 00:04:20.702 Test: mem map translation ...[2024-12-14 22:13:41.355625] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:20.702 [2024-12-14 
22:13:41.355639] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:20.702 [2024-12-14 22:13:41.355687] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:20.702 [2024-12-14 22:13:41.355692] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:20.702 passed 00:04:20.702 Test: mem map registration ...[2024-12-14 22:13:41.395164] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:20.702 [2024-12-14 22:13:41.395178] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:20.702 passed 00:04:20.702 Test: mem map adjacent registrations ...passed 00:04:20.702 00:04:20.702 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.702 suites 1 1 n/a 0 0 00:04:20.702 tests 4 4 4 0 0 00:04:20.702 asserts 152 152 152 0 n/a 00:04:20.702 00:04:20.702 Elapsed time = 0.127 seconds 00:04:20.702 00:04:20.702 real 0m0.136s 00:04:20.702 user 0m0.126s 00:04:20.702 sys 0m0.009s 00:04:20.702 22:13:41 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.702 22:13:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:20.702 ************************************ 00:04:20.702 END TEST env_memory 00:04:20.702 ************************************ 00:04:20.702 22:13:41 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:20.702 22:13:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:20.702 22:13:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.702 22:13:41 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.702 ************************************ 00:04:20.702 START TEST env_vtophys 00:04:20.702 ************************************ 00:04:20.702 22:13:41 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:20.702 EAL: lib.eal log level changed from notice to debug 00:04:20.702 EAL: Detected lcore 0 as core 0 on socket 0 00:04:20.702 EAL: Detected lcore 1 as core 1 on socket 0 00:04:20.702 EAL: Detected lcore 2 as core 2 on socket 0 00:04:20.702 EAL: Detected lcore 3 as core 3 on socket 0 00:04:20.702 EAL: Detected lcore 4 as core 4 on socket 0 00:04:20.702 EAL: Detected lcore 5 as core 5 on socket 0 00:04:20.702 EAL: Detected lcore 6 as core 6 on socket 0 00:04:20.702 EAL: Detected lcore 7 as core 8 on socket 0 00:04:20.702 EAL: Detected lcore 8 as core 9 on socket 0 00:04:20.702 EAL: Detected lcore 9 as core 10 on socket 0 00:04:20.702 EAL: Detected lcore 10 as core 11 on socket 0 00:04:20.702 EAL: Detected lcore 11 as core 12 on socket 0 00:04:20.702 EAL: Detected lcore 12 as core 13 on socket 0 00:04:20.702 EAL: Detected lcore 13 as core 16 on socket 0 00:04:20.702 EAL: Detected lcore 14 as core 17 on socket 0 00:04:20.702 EAL: Detected lcore 15 as core 18 on socket 0 00:04:20.702 EAL: Detected lcore 16 as core 19 on socket 0 00:04:20.702 EAL: Detected lcore 17 as core 20 on socket 0 00:04:20.702 EAL: Detected lcore 18 as core 21 on socket 0 00:04:20.702 EAL: Detected lcore 19 as core 25 on socket 0 00:04:20.702 EAL: Detected lcore 20 as core 26 on socket 0 00:04:20.702 EAL: Detected lcore 21 as core 27 on socket 0 00:04:20.702 EAL: Detected lcore 22 as core 28 on socket 0 00:04:20.702 EAL: Detected lcore 23 as core 29 on socket 0 00:04:20.702 EAL: Detected lcore 24 as core 0 on socket 1 00:04:20.702 EAL: Detected lcore 25 
as core 1 on socket 1 00:04:20.702 EAL: Detected lcore 26 as core 2 on socket 1 00:04:20.702 EAL: Detected lcore 27 as core 3 on socket 1 00:04:20.702 EAL: Detected lcore 28 as core 4 on socket 1 00:04:20.702 EAL: Detected lcore 29 as core 5 on socket 1 00:04:20.702 EAL: Detected lcore 30 as core 6 on socket 1 00:04:20.702 EAL: Detected lcore 31 as core 8 on socket 1 00:04:20.702 EAL: Detected lcore 32 as core 9 on socket 1 00:04:20.702 EAL: Detected lcore 33 as core 10 on socket 1 00:04:20.702 EAL: Detected lcore 34 as core 11 on socket 1 00:04:20.702 EAL: Detected lcore 35 as core 12 on socket 1 00:04:20.702 EAL: Detected lcore 36 as core 13 on socket 1 00:04:20.702 EAL: Detected lcore 37 as core 16 on socket 1 00:04:20.702 EAL: Detected lcore 38 as core 17 on socket 1 00:04:20.702 EAL: Detected lcore 39 as core 18 on socket 1 00:04:20.702 EAL: Detected lcore 40 as core 19 on socket 1 00:04:20.702 EAL: Detected lcore 41 as core 20 on socket 1 00:04:20.702 EAL: Detected lcore 42 as core 21 on socket 1 00:04:20.702 EAL: Detected lcore 43 as core 25 on socket 1 00:04:20.702 EAL: Detected lcore 44 as core 26 on socket 1 00:04:20.702 EAL: Detected lcore 45 as core 27 on socket 1 00:04:20.702 EAL: Detected lcore 46 as core 28 on socket 1 00:04:20.702 EAL: Detected lcore 47 as core 29 on socket 1 00:04:20.702 EAL: Detected lcore 48 as core 0 on socket 0 00:04:20.702 EAL: Detected lcore 49 as core 1 on socket 0 00:04:20.702 EAL: Detected lcore 50 as core 2 on socket 0 00:04:20.702 EAL: Detected lcore 51 as core 3 on socket 0 00:04:20.703 EAL: Detected lcore 52 as core 4 on socket 0 00:04:20.703 EAL: Detected lcore 53 as core 5 on socket 0 00:04:20.703 EAL: Detected lcore 54 as core 6 on socket 0 00:04:20.703 EAL: Detected lcore 55 as core 8 on socket 0 00:04:20.703 EAL: Detected lcore 56 as core 9 on socket 0 00:04:20.703 EAL: Detected lcore 57 as core 10 on socket 0 00:04:20.703 EAL: Detected lcore 58 as core 11 on socket 0 00:04:20.703 EAL: Detected lcore 59 as core 12 
on socket 0 00:04:20.703 EAL: Detected lcore 60 as core 13 on socket 0 00:04:20.703 EAL: Detected lcore 61 as core 16 on socket 0 00:04:20.703 EAL: Detected lcore 62 as core 17 on socket 0 00:04:20.703 EAL: Detected lcore 63 as core 18 on socket 0 00:04:20.703 EAL: Detected lcore 64 as core 19 on socket 0 00:04:20.703 EAL: Detected lcore 65 as core 20 on socket 0 00:04:20.703 EAL: Detected lcore 66 as core 21 on socket 0 00:04:20.703 EAL: Detected lcore 67 as core 25 on socket 0 00:04:20.703 EAL: Detected lcore 68 as core 26 on socket 0 00:04:20.703 EAL: Detected lcore 69 as core 27 on socket 0 00:04:20.703 EAL: Detected lcore 70 as core 28 on socket 0 00:04:20.703 EAL: Detected lcore 71 as core 29 on socket 0 00:04:20.703 EAL: Detected lcore 72 as core 0 on socket 1 00:04:20.703 EAL: Detected lcore 73 as core 1 on socket 1 00:04:20.703 EAL: Detected lcore 74 as core 2 on socket 1 00:04:20.703 EAL: Detected lcore 75 as core 3 on socket 1 00:04:20.703 EAL: Detected lcore 76 as core 4 on socket 1 00:04:20.703 EAL: Detected lcore 77 as core 5 on socket 1 00:04:20.703 EAL: Detected lcore 78 as core 6 on socket 1 00:04:20.703 EAL: Detected lcore 79 as core 8 on socket 1 00:04:20.703 EAL: Detected lcore 80 as core 9 on socket 1 00:04:20.703 EAL: Detected lcore 81 as core 10 on socket 1 00:04:20.703 EAL: Detected lcore 82 as core 11 on socket 1 00:04:20.703 EAL: Detected lcore 83 as core 12 on socket 1 00:04:20.703 EAL: Detected lcore 84 as core 13 on socket 1 00:04:20.703 EAL: Detected lcore 85 as core 16 on socket 1 00:04:20.703 EAL: Detected lcore 86 as core 17 on socket 1 00:04:20.703 EAL: Detected lcore 87 as core 18 on socket 1 00:04:20.703 EAL: Detected lcore 88 as core 19 on socket 1 00:04:20.703 EAL: Detected lcore 89 as core 20 on socket 1 00:04:20.703 EAL: Detected lcore 90 as core 21 on socket 1 00:04:20.703 EAL: Detected lcore 91 as core 25 on socket 1 00:04:20.703 EAL: Detected lcore 92 as core 26 on socket 1 00:04:20.703 EAL: Detected lcore 93 as core 27 on 
socket 1 00:04:20.703 EAL: Detected lcore 94 as core 28 on socket 1 00:04:20.703 EAL: Detected lcore 95 as core 29 on socket 1 00:04:20.703 EAL: Maximum logical cores by configuration: 128 00:04:20.703 EAL: Detected CPU lcores: 96 00:04:20.703 EAL: Detected NUMA nodes: 2 00:04:20.703 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:04:20.703 EAL: Detected shared linkage of DPDK 00:04:20.703 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:04:20.703 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:04:20.703 EAL: Registered [vdev] bus. 00:04:20.703 EAL: bus.vdev log level changed from disabled to notice 00:04:20.703 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:04:20.703 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:04:20.703 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:20.703 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:20.703 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:04:20.703 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:04:20.703 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:04:20.703 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:04:20.703 EAL: No shared files mode enabled, IPC will be disabled 00:04:20.703 EAL: No shared files mode enabled, IPC is disabled 00:04:20.703 EAL: Bus pci wants IOVA as 'DC' 00:04:20.703 EAL: Bus vdev wants IOVA as 'DC' 00:04:20.703 EAL: Buses did not request a specific IOVA mode. 
00:04:20.703 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:20.703 EAL: Selected IOVA mode 'VA' 00:04:20.703 EAL: Probing VFIO support... 00:04:20.703 EAL: IOMMU type 1 (Type 1) is supported 00:04:20.703 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:20.703 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:20.703 EAL: VFIO support initialized 00:04:20.703 EAL: Ask a virtual area of 0x2e000 bytes 00:04:20.703 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:20.703 EAL: Setting up physically contiguous memory... 00:04:20.703 EAL: Setting maximum number of open files to 524288 00:04:20.703 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:20.703 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:20.703 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:20.703 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.703 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:20.703 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.703 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.703 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:20.703 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:20.703 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.703 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:20.703 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.703 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.703 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:20.703 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:20.703 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.703 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:20.703 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.703 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.703 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 
00:04:20.703 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:20.703 EAL: Ask a virtual area of 0x61000 bytes
00:04:20.703 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:20.703 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:20.703 EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.703 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:20.703 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:20.703 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:04:20.703 EAL: Ask a virtual area of 0x61000 bytes
00:04:20.703 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:04:20.703 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:20.703 EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.703 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:04:20.703 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:04:20.703 EAL: Ask a virtual area of 0x61000 bytes
00:04:20.703 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:04:20.703 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:20.703 EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.703 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:04:20.703 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:04:20.703 EAL: Ask a virtual area of 0x61000 bytes
00:04:20.703 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:04:20.703 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:20.703 EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.703 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:04:20.703 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:04:20.703 EAL: Ask a virtual area of 0x61000 bytes
00:04:20.703 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:04:20.703 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:20.703 EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.703 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:04:20.703 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:04:20.703 EAL: Hugepages will be freed exactly as allocated.
00:04:20.703 EAL: No shared files mode enabled, IPC is disabled
00:04:20.703 EAL: No shared files mode enabled, IPC is disabled
00:04:20.703 EAL: TSC frequency is ~2100000 KHz
00:04:20.703 EAL: Main lcore 0 is ready (tid=7f4013b9ba00;cpuset=[0])
00:04:20.703 EAL: Trying to obtain current memory policy.
00:04:20.703 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.703 EAL: Restoring previous memory policy: 0
00:04:20.703 EAL: request: mp_malloc_sync
00:04:20.703 EAL: No shared files mode enabled, IPC is disabled
00:04:20.703 EAL: Heap on socket 0 was expanded by 2MB
00:04:20.703 EAL: PCI device 0000:3d:00.0 on NUMA socket 0
00:04:20.703 EAL: probe driver: 8086:37d2 net_i40e
00:04:20.703 EAL: Not managed by a supported kernel driver, skipped
00:04:20.703 EAL: PCI device 0000:3d:00.1 on NUMA socket 0
00:04:20.703 EAL: probe driver: 8086:37d2 net_i40e
00:04:20.703 EAL: Not managed by a supported kernel driver, skipped
00:04:20.703 EAL: No shared files mode enabled, IPC is disabled
00:04:20.703 EAL: No shared files mode enabled, IPC is disabled
00:04:20.703 EAL: No PCI address specified using 'addr=' in: bus=pci
00:04:20.964 EAL: Mem event callback 'spdk:(nil)' registered
00:04:20.964 
00:04:20.964 
00:04:20.964 CUnit - A unit testing framework for C - Version 2.1-3
00:04:20.964 http://cunit.sourceforge.net/
00:04:20.964 
00:04:20.964 
00:04:20.964 Suite: components_suite
00:04:20.964 Test: vtophys_malloc_test ...passed
00:04:20.964 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:20.964 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.964 EAL: Restoring previous memory policy: 4
00:04:20.964 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.964 EAL: request: mp_malloc_sync
00:04:20.964 EAL: No shared files mode enabled, IPC is disabled
00:04:20.964 EAL: Heap on socket 0 was expanded by 4MB
00:04:20.964 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.964 EAL: request: mp_malloc_sync
00:04:20.964 EAL: No shared files mode enabled, IPC is disabled
00:04:20.964 EAL: Heap on socket 0 was shrunk by 4MB
00:04:20.964 EAL: Trying to obtain current memory policy.
00:04:20.964 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.964 EAL: Restoring previous memory policy: 4
00:04:20.964 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.964 EAL: request: mp_malloc_sync
00:04:20.964 EAL: No shared files mode enabled, IPC is disabled
00:04:20.964 EAL: Heap on socket 0 was expanded by 6MB
00:04:20.964 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.964 EAL: request: mp_malloc_sync
00:04:20.964 EAL: No shared files mode enabled, IPC is disabled
00:04:20.964 EAL: Heap on socket 0 was shrunk by 6MB
00:04:20.964 EAL: Trying to obtain current memory policy.
00:04:20.964 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.964 EAL: Restoring previous memory policy: 4
00:04:20.964 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.964 EAL: request: mp_malloc_sync
00:04:20.964 EAL: No shared files mode enabled, IPC is disabled
00:04:20.964 EAL: Heap on socket 0 was expanded by 10MB
00:04:20.964 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.964 EAL: request: mp_malloc_sync
00:04:20.964 EAL: No shared files mode enabled, IPC is disabled
00:04:20.964 EAL: Heap on socket 0 was shrunk by 10MB
00:04:20.964 EAL: Trying to obtain current memory policy.
00:04:20.964 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.964 EAL: Restoring previous memory policy: 4
00:04:20.964 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.964 EAL: request: mp_malloc_sync
00:04:20.964 EAL: No shared files mode enabled, IPC is disabled
00:04:20.964 EAL: Heap on socket 0 was expanded by 18MB
00:04:20.964 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.964 EAL: request: mp_malloc_sync
00:04:20.964 EAL: No shared files mode enabled, IPC is disabled
00:04:20.964 EAL: Heap on socket 0 was shrunk by 18MB
00:04:20.964 EAL: Trying to obtain current memory policy.
00:04:20.964 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.964 EAL: Restoring previous memory policy: 4
00:04:20.964 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.964 EAL: request: mp_malloc_sync
00:04:20.964 EAL: No shared files mode enabled, IPC is disabled
00:04:20.964 EAL: Heap on socket 0 was expanded by 34MB
00:04:20.964 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.964 EAL: request: mp_malloc_sync
00:04:20.964 EAL: No shared files mode enabled, IPC is disabled
00:04:20.964 EAL: Heap on socket 0 was shrunk by 34MB
00:04:20.964 EAL: Trying to obtain current memory policy.
00:04:20.964 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.964 EAL: Restoring previous memory policy: 4
00:04:20.964 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.964 EAL: request: mp_malloc_sync
00:04:20.964 EAL: No shared files mode enabled, IPC is disabled
00:04:20.964 EAL: Heap on socket 0 was expanded by 66MB
00:04:20.964 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.964 EAL: request: mp_malloc_sync
00:04:20.964 EAL: No shared files mode enabled, IPC is disabled
00:04:20.964 EAL: Heap on socket 0 was shrunk by 66MB
00:04:20.964 EAL: Trying to obtain current memory policy.
00:04:20.964 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.964 EAL: Restoring previous memory policy: 4
00:04:20.964 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.965 EAL: request: mp_malloc_sync
00:04:20.965 EAL: No shared files mode enabled, IPC is disabled
00:04:20.965 EAL: Heap on socket 0 was expanded by 130MB
00:04:20.965 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.965 EAL: request: mp_malloc_sync
00:04:20.965 EAL: No shared files mode enabled, IPC is disabled
00:04:20.965 EAL: Heap on socket 0 was shrunk by 130MB
00:04:20.965 EAL: Trying to obtain current memory policy.
00:04:20.965 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.965 EAL: Restoring previous memory policy: 4
00:04:20.965 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.965 EAL: request: mp_malloc_sync
00:04:20.965 EAL: No shared files mode enabled, IPC is disabled
00:04:20.965 EAL: Heap on socket 0 was expanded by 258MB
00:04:20.965 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.965 EAL: request: mp_malloc_sync
00:04:20.965 EAL: No shared files mode enabled, IPC is disabled
00:04:20.965 EAL: Heap on socket 0 was shrunk by 258MB
00:04:20.965 EAL: Trying to obtain current memory policy.
00:04:20.965 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:21.225 EAL: Restoring previous memory policy: 4
00:04:21.225 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.225 EAL: request: mp_malloc_sync
00:04:21.225 EAL: No shared files mode enabled, IPC is disabled
00:04:21.225 EAL: Heap on socket 0 was expanded by 514MB
00:04:21.225 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.225 EAL: request: mp_malloc_sync
00:04:21.225 EAL: No shared files mode enabled, IPC is disabled
00:04:21.225 EAL: Heap on socket 0 was shrunk by 514MB
00:04:21.225 EAL: Trying to obtain current memory policy.
00:04:21.225 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:21.484 EAL: Restoring previous memory policy: 4
00:04:21.484 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.484 EAL: request: mp_malloc_sync
00:04:21.484 EAL: No shared files mode enabled, IPC is disabled
00:04:21.484 EAL: Heap on socket 0 was expanded by 1026MB
00:04:21.744 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.744 EAL: request: mp_malloc_sync
00:04:21.744 EAL: No shared files mode enabled, IPC is disabled
00:04:21.744 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:21.744 passed
00:04:21.744 
00:04:21.744 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:21.744               suites      1      1    n/a      0        0
00:04:21.744                tests      2      2      2      0        0
00:04:21.744              asserts    497    497    497      0      n/a
00:04:21.744 
00:04:21.744 Elapsed time =    0.969 seconds
00:04:21.745 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.745 EAL: request: mp_malloc_sync
00:04:21.745 EAL: No shared files mode enabled, IPC is disabled
00:04:21.745 EAL: Heap on socket 0 was shrunk by 2MB
00:04:21.745 EAL: No shared files mode enabled, IPC is disabled
00:04:21.745 EAL: No shared files mode enabled, IPC is disabled
00:04:21.745 EAL: No shared files mode enabled, IPC is disabled
00:04:21.745 
00:04:21.745 real	0m1.090s
00:04:21.745 user	0m0.637s
00:04:21.745 sys	0m0.423s
00:04:21.745 22:13:42 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:21.745 22:13:42 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:21.745 ************************************
00:04:21.745 END TEST env_vtophys
00:04:21.745 ************************************
00:04:22.005 22:13:42 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:22.005 22:13:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:22.005 22:13:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:22.005 22:13:42 env -- common/autotest_common.sh@10 -- # set +x
00:04:22.005 ************************************
00:04:22.005 START TEST env_pci
00:04:22.005 ************************************
00:04:22.005 22:13:42 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:22.005 
00:04:22.005 
00:04:22.005 CUnit - A unit testing framework for C - Version 2.1-3
00:04:22.005 http://cunit.sourceforge.net/
00:04:22.005 
00:04:22.005 
00:04:22.005 Suite: pci
00:04:22.005 Test: pci_hook ...[2024-12-14 22:13:42.687317] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 103238 has claimed it
00:04:22.005 EAL: Cannot find device (10000:00:01.0)
00:04:22.005 EAL: Failed to attach device on primary process
00:04:22.005 passed
00:04:22.005 
00:04:22.005 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:22.005               suites      1      1    n/a      0        0
00:04:22.005                tests      1      1      1      0        0
00:04:22.005              asserts     25     25     25      0      n/a
00:04:22.005 
00:04:22.005 Elapsed time =    0.028 seconds
00:04:22.005 
00:04:22.005 real	0m0.045s
00:04:22.005 user	0m0.019s
00:04:22.005 sys	0m0.026s
00:04:22.005 22:13:42 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:22.005 22:13:42 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:22.005 ************************************
00:04:22.005 END TEST env_pci
00:04:22.005 ************************************
00:04:22.005 22:13:42 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:22.005 22:13:42 env -- env/env.sh@15 -- # uname
00:04:22.005 22:13:42 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:22.005 22:13:42 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:22.005 22:13:42 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:22.005 22:13:42 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:22.005 22:13:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:22.005 22:13:42 env -- common/autotest_common.sh@10 -- # set +x
00:04:22.005 ************************************
00:04:22.005 START TEST env_dpdk_post_init
00:04:22.005 ************************************
00:04:22.005 22:13:42 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:22.005 EAL: Detected CPU lcores: 96
00:04:22.005 EAL: Detected NUMA nodes: 2
00:04:22.005 EAL: Detected shared linkage of DPDK
00:04:22.005 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:22.005 EAL: Selected IOVA mode 'VA'
00:04:22.005 EAL: VFIO support initialized
00:04:22.005 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:22.265 EAL: Using IOMMU type 1 (Type 1)
00:04:22.266 EAL: Ignore mapping IO port bar(1)
00:04:22.266 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:22.266 EAL: Ignore mapping IO port bar(1)
00:04:22.266 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:22.266 EAL: Ignore mapping IO port bar(1)
00:04:22.266 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:22.266 EAL: Ignore mapping IO port bar(1)
00:04:22.266 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:22.266 EAL: Ignore mapping IO port bar(1)
00:04:22.266 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:22.266 EAL: Ignore mapping IO port bar(1)
00:04:22.266 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:22.266 EAL: Ignore mapping IO port bar(1)
00:04:22.266 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:22.266 EAL: Ignore mapping IO port bar(1)
00:04:22.266 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:23.207 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:04:23.207 EAL: Ignore mapping IO port bar(1)
00:04:23.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:23.207 EAL: Ignore mapping IO port bar(1)
00:04:23.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:23.207 EAL: Ignore mapping IO port bar(1)
00:04:23.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:23.207 EAL: Ignore mapping IO port bar(1)
00:04:23.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:23.207 EAL: Ignore mapping IO port bar(1)
00:04:23.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:23.207 EAL: Ignore mapping IO port bar(1)
00:04:23.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:23.207 EAL: Ignore mapping IO port bar(1)
00:04:23.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:23.207 EAL: Ignore mapping IO port bar(1)
00:04:23.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:26.500 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:04:26.500 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:04:26.500 Starting DPDK initialization...
00:04:26.500 Starting SPDK post initialization...
00:04:26.500 SPDK NVMe probe
00:04:26.500 Attaching to 0000:5e:00.0
00:04:26.500 Attached to 0000:5e:00.0
00:04:26.500 Cleaning up...
00:04:26.500 
00:04:26.500 real	0m4.353s
00:04:26.500 user	0m3.248s
00:04:26.500 sys	0m0.181s
00:04:26.500 22:13:47 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:26.500 22:13:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:26.500 ************************************
00:04:26.500 END TEST env_dpdk_post_init
00:04:26.500 ************************************
00:04:26.500 22:13:47 env -- env/env.sh@26 -- # uname
00:04:26.500 22:13:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:26.500 22:13:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:26.500 22:13:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:26.500 22:13:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:26.500 22:13:47 env -- common/autotest_common.sh@10 -- # set +x
00:04:26.500 ************************************
00:04:26.500 START TEST env_mem_callbacks
00:04:26.500 ************************************
00:04:26.500 22:13:47 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:26.500 EAL: Detected CPU lcores: 96
00:04:26.500 EAL: Detected NUMA nodes: 2
00:04:26.500 EAL: Detected shared linkage of DPDK
00:04:26.500 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:26.500 EAL: Selected IOVA mode 'VA'
00:04:26.500 EAL: VFIO support initialized
00:04:26.500 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:26.500 
00:04:26.500 
00:04:26.500 CUnit - A unit testing framework for C - Version 2.1-3
00:04:26.500 http://cunit.sourceforge.net/
00:04:26.500 
00:04:26.500 
00:04:26.500 Suite: memory
00:04:26.500 Test: test ...
00:04:26.500 register 0x200000200000 2097152
00:04:26.500 malloc 3145728
00:04:26.500 register 0x200000400000 4194304
00:04:26.500 buf 0x200000500000 len 3145728 PASSED
00:04:26.500 malloc 64
00:04:26.500 buf 0x2000004fff40 len 64 PASSED
00:04:26.500 malloc 4194304
00:04:26.500 register 0x200000800000 6291456
00:04:26.500 buf 0x200000a00000 len 4194304 PASSED
00:04:26.500 free 0x200000500000 3145728
00:04:26.500 free 0x2000004fff40 64
00:04:26.500 unregister 0x200000400000 4194304 PASSED
00:04:26.500 free 0x200000a00000 4194304
00:04:26.500 unregister 0x200000800000 6291456 PASSED
00:04:26.500 malloc 8388608
00:04:26.500 register 0x200000400000 10485760
00:04:26.500 buf 0x200000600000 len 8388608 PASSED
00:04:26.500 free 0x200000600000 8388608
00:04:26.500 unregister 0x200000400000 10485760 PASSED
00:04:26.500 passed
00:04:26.500 
00:04:26.500 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:26.500               suites      1      1    n/a      0        0
00:04:26.500                tests      1      1      1      0        0
00:04:26.500              asserts     15     15     15      0      n/a
00:04:26.500 
00:04:26.500 Elapsed time =    0.007 seconds
00:04:26.500 
00:04:26.500 real	0m0.054s
00:04:26.500 user	0m0.014s
00:04:26.500 sys	0m0.040s
00:04:26.500 22:13:47 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:26.500 22:13:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:26.500 ************************************
00:04:26.500 END TEST env_mem_callbacks
00:04:26.500 ************************************
00:04:26.500 
00:04:26.500 real	0m6.227s
00:04:26.500 user	0m4.289s
00:04:26.500 sys	0m1.020s
00:04:26.500 22:13:47 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:26.500 22:13:47 env -- common/autotest_common.sh@10 -- # set +x
00:04:26.500 ************************************
00:04:26.500 END TEST env
00:04:26.500 ************************************
00:04:26.500 22:13:47 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:26.500 22:13:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:26.500 22:13:47 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:26.500 22:13:47 -- common/autotest_common.sh@10 -- # set +x
00:04:26.760 ************************************
00:04:26.760 START TEST rpc
00:04:26.760 ************************************
00:04:26.760 22:13:47 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:26.760 * Looking for test storage...
00:04:26.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:26.760 22:13:47 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:26.760 22:13:47 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:26.760 22:13:47 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:26.761 22:13:47 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:26.761 22:13:47 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:26.761 22:13:47 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:26.761 22:13:47 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:26.761 22:13:47 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:26.761 22:13:47 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:26.761 22:13:47 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:26.761 22:13:47 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:26.761 22:13:47 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:26.761 22:13:47 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:26.761 22:13:47 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:26.761 22:13:47 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:26.761 22:13:47 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:26.761 22:13:47 rpc -- scripts/common.sh@345 -- # : 1
00:04:26.761 22:13:47 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:26.761 22:13:47 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:26.761 22:13:47 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:26.761 22:13:47 rpc -- scripts/common.sh@353 -- # local d=1
00:04:26.761 22:13:47 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:26.761 22:13:47 rpc -- scripts/common.sh@355 -- # echo 1
00:04:26.761 22:13:47 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:26.761 22:13:47 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:26.761 22:13:47 rpc -- scripts/common.sh@353 -- # local d=2
00:04:26.761 22:13:47 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:26.761 22:13:47 rpc -- scripts/common.sh@355 -- # echo 2
00:04:26.761 22:13:47 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:26.761 22:13:47 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:26.761 22:13:47 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:26.761 22:13:47 rpc -- scripts/common.sh@368 -- # return 0
00:04:26.761 22:13:47 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:26.761 22:13:47 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:26.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:26.761 --rc genhtml_branch_coverage=1
00:04:26.761 --rc genhtml_function_coverage=1
00:04:26.761 --rc genhtml_legend=1
00:04:26.761 --rc geninfo_all_blocks=1
00:04:26.761 --rc geninfo_unexecuted_blocks=1
00:04:26.761 
00:04:26.761 '
00:04:26.761 22:13:47 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:26.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:26.761 --rc genhtml_branch_coverage=1
00:04:26.761 --rc genhtml_function_coverage=1
00:04:26.761 --rc genhtml_legend=1
00:04:26.761 --rc geninfo_all_blocks=1
00:04:26.761 --rc geninfo_unexecuted_blocks=1
00:04:26.761 
00:04:26.761 '
00:04:26.761 22:13:47 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:26.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:26.761 --rc genhtml_branch_coverage=1
00:04:26.761 --rc genhtml_function_coverage=1
00:04:26.761 --rc genhtml_legend=1
00:04:26.761 --rc geninfo_all_blocks=1
00:04:26.761 --rc geninfo_unexecuted_blocks=1
00:04:26.761 
00:04:26.761 '
00:04:26.761 22:13:47 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:26.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:26.761 --rc genhtml_branch_coverage=1
00:04:26.761 --rc genhtml_function_coverage=1
00:04:26.761 --rc genhtml_legend=1
00:04:26.761 --rc geninfo_all_blocks=1
00:04:26.761 --rc geninfo_unexecuted_blocks=1
00:04:26.761 
00:04:26.761 '
00:04:26.761 22:13:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=104108
00:04:26.761 22:13:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:26.761 22:13:47 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:26.761 22:13:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 104108
00:04:26.761 22:13:47 rpc -- common/autotest_common.sh@835 -- # '[' -z 104108 ']'
00:04:26.761 22:13:47 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:26.761 22:13:47 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:26.761 22:13:47 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:26.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:26.761 22:13:47 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:26.761 22:13:47 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:26.761 [2024-12-14 22:13:47.616807] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:04:26.761 [2024-12-14 22:13:47.616851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104108 ]
00:04:27.021 [2024-12-14 22:13:47.689519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:27.021 [2024-12-14 22:13:47.711859] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:27.021 [2024-12-14 22:13:47.711893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 104108' to capture a snapshot of events at runtime.
00:04:27.021 [2024-12-14 22:13:47.711900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:27.021 [2024-12-14 22:13:47.711910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:27.021 [2024-12-14 22:13:47.711915] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid104108 for offline analysis/debug.
00:04:27.021 [2024-12-14 22:13:47.712424] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:27.281 22:13:47 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:27.281 22:13:47 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:27.281 22:13:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:27.281 22:13:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:27.281 22:13:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:27.281 22:13:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:27.281 22:13:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:27.281 22:13:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:27.281 22:13:47 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:27.281 ************************************
00:04:27.281 START TEST rpc_integrity
00:04:27.281 ************************************
00:04:27.281 22:13:47 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:27.281 22:13:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:27.281 22:13:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:27.281 22:13:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.281 22:13:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:27.281 22:13:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:27.281 22:13:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:27.281 22:13:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:27.281 22:13:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:27.281 22:13:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:27.281 22:13:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.281 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:27.281 22:13:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:27.281 22:13:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:27.281 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:27.281 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.281 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:27.281 22:13:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:27.281 {
00:04:27.281 "name": "Malloc0",
00:04:27.281 "aliases": [
00:04:27.281 "e172997c-f749-4e29-9175-32042840faee"
00:04:27.281 ],
00:04:27.281 "product_name": "Malloc disk",
00:04:27.281 "block_size": 512,
00:04:27.281 "num_blocks": 16384,
00:04:27.281 "uuid": "e172997c-f749-4e29-9175-32042840faee",
00:04:27.281 "assigned_rate_limits": {
00:04:27.281 "rw_ios_per_sec": 0,
00:04:27.281 "rw_mbytes_per_sec": 0,
00:04:27.281 "r_mbytes_per_sec": 0,
00:04:27.281 "w_mbytes_per_sec": 0
00:04:27.281 },
00:04:27.281 "claimed": false,
00:04:27.281 "zoned": false,
00:04:27.281 "supported_io_types": {
00:04:27.281 "read": true,
00:04:27.281 "write": true,
00:04:27.281 "unmap": true,
00:04:27.281 "flush": true,
00:04:27.281 "reset": true,
00:04:27.281 "nvme_admin": false,
00:04:27.281 "nvme_io": false,
00:04:27.281 "nvme_io_md": false,
00:04:27.281 "write_zeroes": true,
00:04:27.281 "zcopy": true,
00:04:27.281 "get_zone_info": false,
00:04:27.281 "zone_management": false,
00:04:27.281 "zone_append": false,
00:04:27.281 "compare": false,
00:04:27.281 "compare_and_write": false,
00:04:27.281 "abort": true,
00:04:27.281 "seek_hole": false,
00:04:27.281 "seek_data": false,
00:04:27.281 "copy": true,
00:04:27.281 "nvme_iov_md": false
00:04:27.281 },
00:04:27.281 "memory_domains": [
00:04:27.281 {
00:04:27.281 "dma_device_id": "system",
00:04:27.281 "dma_device_type": 1
00:04:27.281 },
00:04:27.281 {
00:04:27.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:27.281 "dma_device_type": 2
00:04:27.281 }
00:04:27.281 ],
00:04:27.281 "driver_specific": {}
00:04:27.281 }
00:04:27.281 ]'
00:04:27.281 22:13:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:27.281 22:13:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:27.281 22:13:48 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:27.281 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:27.281 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.281 [2024-12-14 22:13:48.065880] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:27.281 [2024-12-14 22:13:48.065912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:27.281 [2024-12-14 22:13:48.065925] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1daba00
00:04:27.281 [2024-12-14 22:13:48.065932] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:27.281 [2024-12-14 22:13:48.066983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:27.281 [2024-12-14 22:13:48.067004] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:27.282 Passthru0
00:04:27.282 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:27.282 22:13:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:27.282 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:27.282 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.282 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:27.282 22:13:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:27.282 {
00:04:27.282 "name": "Malloc0",
00:04:27.282 "aliases": [
00:04:27.282 "e172997c-f749-4e29-9175-32042840faee"
00:04:27.282 ],
00:04:27.282 "product_name": "Malloc disk",
00:04:27.282 "block_size": 512,
00:04:27.282 "num_blocks": 16384,
00:04:27.282 "uuid": "e172997c-f749-4e29-9175-32042840faee",
00:04:27.282 "assigned_rate_limits": {
00:04:27.282 "rw_ios_per_sec": 0,
00:04:27.282 "rw_mbytes_per_sec": 0,
00:04:27.282 "r_mbytes_per_sec": 0,
00:04:27.282 "w_mbytes_per_sec": 0
00:04:27.282 },
00:04:27.282 "claimed": true,
00:04:27.282 "claim_type": "exclusive_write",
00:04:27.282 "zoned": false,
00:04:27.282 "supported_io_types": {
00:04:27.282 "read": true,
00:04:27.282 "write": true,
00:04:27.282 "unmap": true,
00:04:27.282 "flush": true,
00:04:27.282 "reset": true,
00:04:27.282 "nvme_admin": false,
00:04:27.282 "nvme_io": false,
00:04:27.282 "nvme_io_md": false,
00:04:27.282 "write_zeroes": true,
00:04:27.282 "zcopy": true,
00:04:27.282 "get_zone_info": false,
00:04:27.282 "zone_management": false,
00:04:27.282 "zone_append": false,
00:04:27.282 "compare": false,
00:04:27.282 "compare_and_write": false,
00:04:27.282 "abort": true,
00:04:27.282 "seek_hole": false,
00:04:27.282 "seek_data": false,
00:04:27.282 "copy": true,
00:04:27.282 "nvme_iov_md": false
00:04:27.282 },
00:04:27.282 "memory_domains": [
00:04:27.282 {
00:04:27.282 "dma_device_id": "system",
00:04:27.282 "dma_device_type": 1
00:04:27.282 },
00:04:27.282 {
00:04:27.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:27.282 "dma_device_type": 2
00:04:27.282 }
00:04:27.282 ],
00:04:27.282 "driver_specific": {}
00:04:27.282 },
00:04:27.282 {
00:04:27.282 "name": "Passthru0", 00:04:27.282 "aliases": [ 00:04:27.282 "790c40fa-08d3-5503-a148-aa35f75cc358" 00:04:27.282 ], 00:04:27.282 "product_name": "passthru", 00:04:27.282 "block_size": 512, 00:04:27.282 "num_blocks": 16384, 00:04:27.282 "uuid": "790c40fa-08d3-5503-a148-aa35f75cc358", 00:04:27.282 "assigned_rate_limits": { 00:04:27.282 "rw_ios_per_sec": 0, 00:04:27.282 "rw_mbytes_per_sec": 0, 00:04:27.282 "r_mbytes_per_sec": 0, 00:04:27.282 "w_mbytes_per_sec": 0 00:04:27.282 }, 00:04:27.282 "claimed": false, 00:04:27.282 "zoned": false, 00:04:27.282 "supported_io_types": { 00:04:27.282 "read": true, 00:04:27.282 "write": true, 00:04:27.282 "unmap": true, 00:04:27.282 "flush": true, 00:04:27.282 "reset": true, 00:04:27.282 "nvme_admin": false, 00:04:27.282 "nvme_io": false, 00:04:27.282 "nvme_io_md": false, 00:04:27.282 "write_zeroes": true, 00:04:27.282 "zcopy": true, 00:04:27.282 "get_zone_info": false, 00:04:27.282 "zone_management": false, 00:04:27.282 "zone_append": false, 00:04:27.282 "compare": false, 00:04:27.282 "compare_and_write": false, 00:04:27.282 "abort": true, 00:04:27.282 "seek_hole": false, 00:04:27.282 "seek_data": false, 00:04:27.282 "copy": true, 00:04:27.282 "nvme_iov_md": false 00:04:27.282 }, 00:04:27.282 "memory_domains": [ 00:04:27.282 { 00:04:27.282 "dma_device_id": "system", 00:04:27.282 "dma_device_type": 1 00:04:27.282 }, 00:04:27.282 { 00:04:27.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.282 "dma_device_type": 2 00:04:27.282 } 00:04:27.282 ], 00:04:27.282 "driver_specific": { 00:04:27.282 "passthru": { 00:04:27.282 "name": "Passthru0", 00:04:27.282 "base_bdev_name": "Malloc0" 00:04:27.282 } 00:04:27.282 } 00:04:27.282 } 00:04:27.282 ]' 00:04:27.282 22:13:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:27.282 22:13:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:27.282 22:13:48 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:27.282 22:13:48 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.282 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.282 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.282 22:13:48 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:27.282 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.282 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.282 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.282 22:13:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:27.282 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.282 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.542 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.542 22:13:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:27.542 22:13:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:27.542 22:13:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:27.542 00:04:27.542 real 0m0.266s 00:04:27.542 user 0m0.161s 00:04:27.542 sys 0m0.040s 00:04:27.542 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.542 22:13:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.542 ************************************ 00:04:27.542 END TEST rpc_integrity 00:04:27.542 ************************************ 00:04:27.542 22:13:48 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:27.542 22:13:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.542 22:13:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.542 22:13:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.542 ************************************ 00:04:27.542 START TEST rpc_plugins 
00:04:27.542 ************************************ 00:04:27.542 22:13:48 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:27.542 22:13:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:27.542 22:13:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.542 22:13:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.542 22:13:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.542 22:13:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:27.542 22:13:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:27.542 22:13:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.542 22:13:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.542 22:13:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.542 22:13:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:27.542 { 00:04:27.542 "name": "Malloc1", 00:04:27.542 "aliases": [ 00:04:27.542 "8b48ca59-56e9-41ed-a84f-ddd3904cc86d" 00:04:27.542 ], 00:04:27.542 "product_name": "Malloc disk", 00:04:27.542 "block_size": 4096, 00:04:27.542 "num_blocks": 256, 00:04:27.542 "uuid": "8b48ca59-56e9-41ed-a84f-ddd3904cc86d", 00:04:27.542 "assigned_rate_limits": { 00:04:27.542 "rw_ios_per_sec": 0, 00:04:27.542 "rw_mbytes_per_sec": 0, 00:04:27.542 "r_mbytes_per_sec": 0, 00:04:27.542 "w_mbytes_per_sec": 0 00:04:27.542 }, 00:04:27.542 "claimed": false, 00:04:27.542 "zoned": false, 00:04:27.542 "supported_io_types": { 00:04:27.542 "read": true, 00:04:27.542 "write": true, 00:04:27.542 "unmap": true, 00:04:27.542 "flush": true, 00:04:27.542 "reset": true, 00:04:27.542 "nvme_admin": false, 00:04:27.542 "nvme_io": false, 00:04:27.542 "nvme_io_md": false, 00:04:27.542 "write_zeroes": true, 00:04:27.542 "zcopy": true, 00:04:27.542 "get_zone_info": false, 00:04:27.542 "zone_management": false, 00:04:27.542 
"zone_append": false, 00:04:27.542 "compare": false, 00:04:27.542 "compare_and_write": false, 00:04:27.542 "abort": true, 00:04:27.542 "seek_hole": false, 00:04:27.542 "seek_data": false, 00:04:27.542 "copy": true, 00:04:27.542 "nvme_iov_md": false 00:04:27.542 }, 00:04:27.542 "memory_domains": [ 00:04:27.542 { 00:04:27.542 "dma_device_id": "system", 00:04:27.542 "dma_device_type": 1 00:04:27.542 }, 00:04:27.542 { 00:04:27.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.542 "dma_device_type": 2 00:04:27.542 } 00:04:27.542 ], 00:04:27.542 "driver_specific": {} 00:04:27.542 } 00:04:27.542 ]' 00:04:27.542 22:13:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:27.542 22:13:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:27.542 22:13:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:27.542 22:13:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.542 22:13:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.542 22:13:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.542 22:13:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:27.542 22:13:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.542 22:13:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.542 22:13:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.542 22:13:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:27.542 22:13:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:27.542 22:13:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:27.542 00:04:27.542 real 0m0.145s 00:04:27.542 user 0m0.086s 00:04:27.542 sys 0m0.018s 00:04:27.542 22:13:48 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.542 22:13:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.802 ************************************ 
00:04:27.802 END TEST rpc_plugins 00:04:27.802 ************************************ 00:04:27.802 22:13:48 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:27.802 22:13:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.802 22:13:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.802 22:13:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.802 ************************************ 00:04:27.802 START TEST rpc_trace_cmd_test 00:04:27.802 ************************************ 00:04:27.802 22:13:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:27.802 22:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:27.802 22:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:27.802 22:13:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.802 22:13:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:27.802 22:13:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.802 22:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:27.802 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid104108", 00:04:27.802 "tpoint_group_mask": "0x8", 00:04:27.802 "iscsi_conn": { 00:04:27.802 "mask": "0x2", 00:04:27.802 "tpoint_mask": "0x0" 00:04:27.802 }, 00:04:27.802 "scsi": { 00:04:27.802 "mask": "0x4", 00:04:27.802 "tpoint_mask": "0x0" 00:04:27.802 }, 00:04:27.802 "bdev": { 00:04:27.802 "mask": "0x8", 00:04:27.802 "tpoint_mask": "0xffffffffffffffff" 00:04:27.802 }, 00:04:27.802 "nvmf_rdma": { 00:04:27.802 "mask": "0x10", 00:04:27.802 "tpoint_mask": "0x0" 00:04:27.802 }, 00:04:27.802 "nvmf_tcp": { 00:04:27.802 "mask": "0x20", 00:04:27.802 "tpoint_mask": "0x0" 00:04:27.802 }, 00:04:27.802 "ftl": { 00:04:27.802 "mask": "0x40", 00:04:27.802 "tpoint_mask": "0x0" 00:04:27.802 }, 00:04:27.802 "blobfs": { 00:04:27.802 "mask": "0x80", 00:04:27.802 
"tpoint_mask": "0x0" 00:04:27.802 }, 00:04:27.802 "dsa": { 00:04:27.802 "mask": "0x200", 00:04:27.802 "tpoint_mask": "0x0" 00:04:27.802 }, 00:04:27.802 "thread": { 00:04:27.802 "mask": "0x400", 00:04:27.802 "tpoint_mask": "0x0" 00:04:27.802 }, 00:04:27.802 "nvme_pcie": { 00:04:27.802 "mask": "0x800", 00:04:27.802 "tpoint_mask": "0x0" 00:04:27.802 }, 00:04:27.802 "iaa": { 00:04:27.802 "mask": "0x1000", 00:04:27.802 "tpoint_mask": "0x0" 00:04:27.802 }, 00:04:27.802 "nvme_tcp": { 00:04:27.802 "mask": "0x2000", 00:04:27.802 "tpoint_mask": "0x0" 00:04:27.802 }, 00:04:27.802 "bdev_nvme": { 00:04:27.802 "mask": "0x4000", 00:04:27.802 "tpoint_mask": "0x0" 00:04:27.802 }, 00:04:27.802 "sock": { 00:04:27.802 "mask": "0x8000", 00:04:27.802 "tpoint_mask": "0x0" 00:04:27.802 }, 00:04:27.802 "blob": { 00:04:27.802 "mask": "0x10000", 00:04:27.802 "tpoint_mask": "0x0" 00:04:27.802 }, 00:04:27.802 "bdev_raid": { 00:04:27.802 "mask": "0x20000", 00:04:27.802 "tpoint_mask": "0x0" 00:04:27.802 }, 00:04:27.802 "scheduler": { 00:04:27.802 "mask": "0x40000", 00:04:27.802 "tpoint_mask": "0x0" 00:04:27.802 } 00:04:27.802 }' 00:04:27.802 22:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:27.802 22:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:27.802 22:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:27.802 22:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:27.802 22:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:27.802 22:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:27.802 22:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:27.802 22:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:27.802 22:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:28.062 22:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:28.062 00:04:28.062 real 0m0.206s 00:04:28.062 user 0m0.179s 00:04:28.062 sys 0m0.020s 00:04:28.062 22:13:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.062 22:13:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:28.062 ************************************ 00:04:28.062 END TEST rpc_trace_cmd_test 00:04:28.062 ************************************ 00:04:28.062 22:13:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:28.062 22:13:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:28.062 22:13:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:28.062 22:13:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.062 22:13:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.062 22:13:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.062 ************************************ 00:04:28.062 START TEST rpc_daemon_integrity 00:04:28.062 ************************************ 00:04:28.062 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:28.062 22:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:28.062 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.062 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.062 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.062 22:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:28.062 22:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:28.062 22:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:28.062 22:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:28.062 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.062 22:13:48 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:28.062 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.062 22:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:28.062 22:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:28.062 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.062 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.062 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.062 22:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:28.062 { 00:04:28.062 "name": "Malloc2", 00:04:28.062 "aliases": [ 00:04:28.062 "368af381-8ab5-4bb2-b998-fed7e28e4d59" 00:04:28.062 ], 00:04:28.062 "product_name": "Malloc disk", 00:04:28.062 "block_size": 512, 00:04:28.062 "num_blocks": 16384, 00:04:28.062 "uuid": "368af381-8ab5-4bb2-b998-fed7e28e4d59", 00:04:28.062 "assigned_rate_limits": { 00:04:28.062 "rw_ios_per_sec": 0, 00:04:28.063 "rw_mbytes_per_sec": 0, 00:04:28.063 "r_mbytes_per_sec": 0, 00:04:28.063 "w_mbytes_per_sec": 0 00:04:28.063 }, 00:04:28.063 "claimed": false, 00:04:28.063 "zoned": false, 00:04:28.063 "supported_io_types": { 00:04:28.063 "read": true, 00:04:28.063 "write": true, 00:04:28.063 "unmap": true, 00:04:28.063 "flush": true, 00:04:28.063 "reset": true, 00:04:28.063 "nvme_admin": false, 00:04:28.063 "nvme_io": false, 00:04:28.063 "nvme_io_md": false, 00:04:28.063 "write_zeroes": true, 00:04:28.063 "zcopy": true, 00:04:28.063 "get_zone_info": false, 00:04:28.063 "zone_management": false, 00:04:28.063 "zone_append": false, 00:04:28.063 "compare": false, 00:04:28.063 "compare_and_write": false, 00:04:28.063 "abort": true, 00:04:28.063 "seek_hole": false, 00:04:28.063 "seek_data": false, 00:04:28.063 "copy": true, 00:04:28.063 "nvme_iov_md": false 00:04:28.063 }, 00:04:28.063 "memory_domains": [ 00:04:28.063 { 
00:04:28.063 "dma_device_id": "system", 00:04:28.063 "dma_device_type": 1 00:04:28.063 }, 00:04:28.063 { 00:04:28.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.063 "dma_device_type": 2 00:04:28.063 } 00:04:28.063 ], 00:04:28.063 "driver_specific": {} 00:04:28.063 } 00:04:28.063 ]' 00:04:28.063 22:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:28.063 22:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:28.063 22:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:28.063 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.063 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.063 [2024-12-14 22:13:48.904144] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:28.063 [2024-12-14 22:13:48.904169] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:28.063 [2024-12-14 22:13:48.904181] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c69ac0 00:04:28.063 [2024-12-14 22:13:48.904187] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:28.063 [2024-12-14 22:13:48.905117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:28.063 [2024-12-14 22:13:48.905136] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:28.063 Passthru0 00:04:28.063 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.063 22:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:28.063 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.063 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.063 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:28.063 22:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:28.063 { 00:04:28.063 "name": "Malloc2", 00:04:28.063 "aliases": [ 00:04:28.063 "368af381-8ab5-4bb2-b998-fed7e28e4d59" 00:04:28.063 ], 00:04:28.063 "product_name": "Malloc disk", 00:04:28.063 "block_size": 512, 00:04:28.063 "num_blocks": 16384, 00:04:28.063 "uuid": "368af381-8ab5-4bb2-b998-fed7e28e4d59", 00:04:28.063 "assigned_rate_limits": { 00:04:28.063 "rw_ios_per_sec": 0, 00:04:28.063 "rw_mbytes_per_sec": 0, 00:04:28.063 "r_mbytes_per_sec": 0, 00:04:28.063 "w_mbytes_per_sec": 0 00:04:28.063 }, 00:04:28.063 "claimed": true, 00:04:28.063 "claim_type": "exclusive_write", 00:04:28.063 "zoned": false, 00:04:28.063 "supported_io_types": { 00:04:28.063 "read": true, 00:04:28.063 "write": true, 00:04:28.063 "unmap": true, 00:04:28.063 "flush": true, 00:04:28.063 "reset": true, 00:04:28.063 "nvme_admin": false, 00:04:28.063 "nvme_io": false, 00:04:28.063 "nvme_io_md": false, 00:04:28.063 "write_zeroes": true, 00:04:28.063 "zcopy": true, 00:04:28.063 "get_zone_info": false, 00:04:28.063 "zone_management": false, 00:04:28.063 "zone_append": false, 00:04:28.063 "compare": false, 00:04:28.063 "compare_and_write": false, 00:04:28.063 "abort": true, 00:04:28.063 "seek_hole": false, 00:04:28.063 "seek_data": false, 00:04:28.063 "copy": true, 00:04:28.063 "nvme_iov_md": false 00:04:28.063 }, 00:04:28.063 "memory_domains": [ 00:04:28.063 { 00:04:28.063 "dma_device_id": "system", 00:04:28.063 "dma_device_type": 1 00:04:28.063 }, 00:04:28.063 { 00:04:28.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.063 "dma_device_type": 2 00:04:28.063 } 00:04:28.063 ], 00:04:28.063 "driver_specific": {} 00:04:28.063 }, 00:04:28.063 { 00:04:28.063 "name": "Passthru0", 00:04:28.063 "aliases": [ 00:04:28.063 "a2828ec2-9e7b-5957-8343-96f2184d0458" 00:04:28.063 ], 00:04:28.063 "product_name": "passthru", 00:04:28.063 "block_size": 512, 00:04:28.063 "num_blocks": 16384, 00:04:28.063 "uuid": 
"a2828ec2-9e7b-5957-8343-96f2184d0458", 00:04:28.063 "assigned_rate_limits": { 00:04:28.063 "rw_ios_per_sec": 0, 00:04:28.063 "rw_mbytes_per_sec": 0, 00:04:28.063 "r_mbytes_per_sec": 0, 00:04:28.063 "w_mbytes_per_sec": 0 00:04:28.063 }, 00:04:28.063 "claimed": false, 00:04:28.063 "zoned": false, 00:04:28.063 "supported_io_types": { 00:04:28.063 "read": true, 00:04:28.063 "write": true, 00:04:28.063 "unmap": true, 00:04:28.063 "flush": true, 00:04:28.063 "reset": true, 00:04:28.063 "nvme_admin": false, 00:04:28.063 "nvme_io": false, 00:04:28.063 "nvme_io_md": false, 00:04:28.063 "write_zeroes": true, 00:04:28.063 "zcopy": true, 00:04:28.063 "get_zone_info": false, 00:04:28.063 "zone_management": false, 00:04:28.063 "zone_append": false, 00:04:28.063 "compare": false, 00:04:28.063 "compare_and_write": false, 00:04:28.063 "abort": true, 00:04:28.063 "seek_hole": false, 00:04:28.063 "seek_data": false, 00:04:28.063 "copy": true, 00:04:28.063 "nvme_iov_md": false 00:04:28.063 }, 00:04:28.063 "memory_domains": [ 00:04:28.063 { 00:04:28.063 "dma_device_id": "system", 00:04:28.063 "dma_device_type": 1 00:04:28.063 }, 00:04:28.063 { 00:04:28.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.063 "dma_device_type": 2 00:04:28.063 } 00:04:28.063 ], 00:04:28.063 "driver_specific": { 00:04:28.063 "passthru": { 00:04:28.063 "name": "Passthru0", 00:04:28.063 "base_bdev_name": "Malloc2" 00:04:28.063 } 00:04:28.063 } 00:04:28.063 } 00:04:28.063 ]' 00:04:28.063 22:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:28.323 22:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:28.323 22:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:28.323 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.323 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.323 22:13:48 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.323 22:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:28.323 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.323 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.323 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.323 22:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:28.323 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.323 22:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.323 22:13:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.323 22:13:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:28.323 22:13:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:28.323 22:13:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:28.323 00:04:28.323 real 0m0.275s 00:04:28.323 user 0m0.167s 00:04:28.323 sys 0m0.039s 00:04:28.323 22:13:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.323 22:13:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.323 ************************************ 00:04:28.323 END TEST rpc_daemon_integrity 00:04:28.323 ************************************ 00:04:28.323 22:13:49 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:28.323 22:13:49 rpc -- rpc/rpc.sh@84 -- # killprocess 104108 00:04:28.323 22:13:49 rpc -- common/autotest_common.sh@954 -- # '[' -z 104108 ']' 00:04:28.323 22:13:49 rpc -- common/autotest_common.sh@958 -- # kill -0 104108 00:04:28.323 22:13:49 rpc -- common/autotest_common.sh@959 -- # uname 00:04:28.323 22:13:49 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.323 22:13:49 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104108 00:04:28.323 22:13:49 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.323 22:13:49 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.323 22:13:49 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104108' 00:04:28.323 killing process with pid 104108 00:04:28.323 22:13:49 rpc -- common/autotest_common.sh@973 -- # kill 104108 00:04:28.323 22:13:49 rpc -- common/autotest_common.sh@978 -- # wait 104108 00:04:28.583 00:04:28.583 real 0m2.034s 00:04:28.583 user 0m2.575s 00:04:28.583 sys 0m0.699s 00:04:28.583 22:13:49 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.583 22:13:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.583 ************************************ 00:04:28.583 END TEST rpc 00:04:28.583 ************************************ 00:04:28.583 22:13:49 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:28.583 22:13:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.583 22:13:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.583 22:13:49 -- common/autotest_common.sh@10 -- # set +x 00:04:28.843 ************************************ 00:04:28.843 START TEST skip_rpc 00:04:28.843 ************************************ 00:04:28.843 22:13:49 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:28.843 * Looking for test storage... 
00:04:28.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:28.843 22:13:49 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:28.843 22:13:49 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:28.843 22:13:49 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:28.843 22:13:49 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:28.843 22:13:49 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.843 22:13:49 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.843 22:13:49 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.843 22:13:49 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.843 22:13:49 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.843 22:13:49 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.844 22:13:49 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:28.844 22:13:49 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.844 22:13:49 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:28.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.844 --rc genhtml_branch_coverage=1 00:04:28.844 --rc genhtml_function_coverage=1 00:04:28.844 --rc genhtml_legend=1 00:04:28.844 --rc geninfo_all_blocks=1 00:04:28.844 --rc geninfo_unexecuted_blocks=1 00:04:28.844 00:04:28.844 ' 00:04:28.844 22:13:49 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:28.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.844 --rc genhtml_branch_coverage=1 00:04:28.844 --rc genhtml_function_coverage=1 00:04:28.844 --rc genhtml_legend=1 00:04:28.844 --rc geninfo_all_blocks=1 00:04:28.844 --rc geninfo_unexecuted_blocks=1 00:04:28.844 00:04:28.844 ' 00:04:28.844 22:13:49 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:28.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.844 --rc genhtml_branch_coverage=1 00:04:28.844 --rc genhtml_function_coverage=1 00:04:28.844 --rc genhtml_legend=1 00:04:28.844 --rc geninfo_all_blocks=1 00:04:28.844 --rc geninfo_unexecuted_blocks=1 00:04:28.844 00:04:28.844 ' 00:04:28.844 22:13:49 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:28.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.844 --rc genhtml_branch_coverage=1 00:04:28.844 --rc genhtml_function_coverage=1 00:04:28.844 --rc genhtml_legend=1 00:04:28.844 --rc geninfo_all_blocks=1 00:04:28.844 --rc geninfo_unexecuted_blocks=1 00:04:28.844 00:04:28.844 ' 00:04:28.844 22:13:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:28.844 22:13:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:28.844 22:13:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:28.844 22:13:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.844 22:13:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.844 22:13:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.844 ************************************ 00:04:28.844 START TEST skip_rpc 00:04:28.844 ************************************ 00:04:28.844 22:13:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:28.844 22:13:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=104731 00:04:28.844 22:13:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.844 22:13:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:28.844 22:13:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:29.104 [2024-12-14 22:13:49.753178] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:04:29.104 [2024-12-14 22:13:49.753216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104731 ] 00:04:29.104 [2024-12-14 22:13:49.826966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.104 [2024-12-14 22:13:49.848996] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.382 22:13:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:34.382 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:34.382 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:34.382 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:34.382 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.382 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:34.382 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.382 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:34.382 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.382 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.382 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:34.382 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:34.382 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:34.382 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:34.382 22:13:54 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:34.382 22:13:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:34.383 22:13:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 104731 00:04:34.383 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 104731 ']' 00:04:34.383 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 104731 00:04:34.383 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:34.383 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.383 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104731 00:04:34.383 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.383 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.383 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104731' 00:04:34.383 killing process with pid 104731 00:04:34.383 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 104731 00:04:34.383 22:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 104731 00:04:34.383 00:04:34.383 real 0m5.359s 00:04:34.383 user 0m5.111s 00:04:34.383 sys 0m0.289s 00:04:34.383 22:13:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.383 22:13:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.383 ************************************ 00:04:34.383 END TEST skip_rpc 00:04:34.383 ************************************ 00:04:34.383 22:13:55 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:34.383 22:13:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.383 22:13:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.383 22:13:55 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:04:34.383 ************************************ 00:04:34.383 START TEST skip_rpc_with_json 00:04:34.383 ************************************ 00:04:34.383 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:34.383 22:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:34.383 22:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=105653 00:04:34.383 22:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.383 22:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.383 22:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 105653 00:04:34.383 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 105653 ']' 00:04:34.383 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.383 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.383 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.383 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.383 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.383 [2024-12-14 22:13:55.183260] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:04:34.383 [2024-12-14 22:13:55.183303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105653 ] 00:04:34.383 [2024-12-14 22:13:55.255717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.643 [2024-12-14 22:13:55.276199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.643 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.643 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:34.643 22:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:34.643 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.643 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.643 [2024-12-14 22:13:55.491899] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:34.643 request: 00:04:34.643 { 00:04:34.643 "trtype": "tcp", 00:04:34.643 "method": "nvmf_get_transports", 00:04:34.643 "req_id": 1 00:04:34.643 } 00:04:34.643 Got JSON-RPC error response 00:04:34.643 response: 00:04:34.643 { 00:04:34.643 "code": -19, 00:04:34.643 "message": "No such device" 00:04:34.643 } 00:04:34.643 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:34.643 22:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:34.643 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.643 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.643 [2024-12-14 22:13:55.504014] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.643 22:13:55 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.643 22:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:34.643 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.643 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.903 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.903 22:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:34.903 { 00:04:34.903 "subsystems": [ 00:04:34.903 { 00:04:34.903 "subsystem": "fsdev", 00:04:34.903 "config": [ 00:04:34.903 { 00:04:34.903 "method": "fsdev_set_opts", 00:04:34.903 "params": { 00:04:34.903 "fsdev_io_pool_size": 65535, 00:04:34.903 "fsdev_io_cache_size": 256 00:04:34.903 } 00:04:34.903 } 00:04:34.903 ] 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "subsystem": "vfio_user_target", 00:04:34.903 "config": null 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "subsystem": "keyring", 00:04:34.903 "config": [] 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "subsystem": "iobuf", 00:04:34.903 "config": [ 00:04:34.903 { 00:04:34.903 "method": "iobuf_set_options", 00:04:34.903 "params": { 00:04:34.903 "small_pool_count": 8192, 00:04:34.903 "large_pool_count": 1024, 00:04:34.903 "small_bufsize": 8192, 00:04:34.903 "large_bufsize": 135168, 00:04:34.903 "enable_numa": false 00:04:34.903 } 00:04:34.903 } 00:04:34.903 ] 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "subsystem": "sock", 00:04:34.903 "config": [ 00:04:34.903 { 00:04:34.903 "method": "sock_set_default_impl", 00:04:34.903 "params": { 00:04:34.903 "impl_name": "posix" 00:04:34.903 } 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "method": "sock_impl_set_options", 00:04:34.903 "params": { 00:04:34.903 "impl_name": "ssl", 00:04:34.903 "recv_buf_size": 4096, 00:04:34.903 "send_buf_size": 4096, 
00:04:34.903 "enable_recv_pipe": true, 00:04:34.903 "enable_quickack": false, 00:04:34.903 "enable_placement_id": 0, 00:04:34.903 "enable_zerocopy_send_server": true, 00:04:34.903 "enable_zerocopy_send_client": false, 00:04:34.903 "zerocopy_threshold": 0, 00:04:34.903 "tls_version": 0, 00:04:34.903 "enable_ktls": false 00:04:34.903 } 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "method": "sock_impl_set_options", 00:04:34.903 "params": { 00:04:34.903 "impl_name": "posix", 00:04:34.903 "recv_buf_size": 2097152, 00:04:34.903 "send_buf_size": 2097152, 00:04:34.903 "enable_recv_pipe": true, 00:04:34.903 "enable_quickack": false, 00:04:34.903 "enable_placement_id": 0, 00:04:34.903 "enable_zerocopy_send_server": true, 00:04:34.903 "enable_zerocopy_send_client": false, 00:04:34.903 "zerocopy_threshold": 0, 00:04:34.903 "tls_version": 0, 00:04:34.903 "enable_ktls": false 00:04:34.903 } 00:04:34.903 } 00:04:34.903 ] 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "subsystem": "vmd", 00:04:34.903 "config": [] 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "subsystem": "accel", 00:04:34.903 "config": [ 00:04:34.903 { 00:04:34.903 "method": "accel_set_options", 00:04:34.903 "params": { 00:04:34.903 "small_cache_size": 128, 00:04:34.903 "large_cache_size": 16, 00:04:34.903 "task_count": 2048, 00:04:34.903 "sequence_count": 2048, 00:04:34.903 "buf_count": 2048 00:04:34.903 } 00:04:34.903 } 00:04:34.903 ] 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "subsystem": "bdev", 00:04:34.903 "config": [ 00:04:34.903 { 00:04:34.903 "method": "bdev_set_options", 00:04:34.903 "params": { 00:04:34.903 "bdev_io_pool_size": 65535, 00:04:34.903 "bdev_io_cache_size": 256, 00:04:34.903 "bdev_auto_examine": true, 00:04:34.903 "iobuf_small_cache_size": 128, 00:04:34.903 "iobuf_large_cache_size": 16 00:04:34.903 } 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "method": "bdev_raid_set_options", 00:04:34.903 "params": { 00:04:34.903 "process_window_size_kb": 1024, 00:04:34.903 "process_max_bandwidth_mb_sec": 0 
00:04:34.903 } 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "method": "bdev_iscsi_set_options", 00:04:34.903 "params": { 00:04:34.903 "timeout_sec": 30 00:04:34.903 } 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "method": "bdev_nvme_set_options", 00:04:34.903 "params": { 00:04:34.903 "action_on_timeout": "none", 00:04:34.903 "timeout_us": 0, 00:04:34.903 "timeout_admin_us": 0, 00:04:34.903 "keep_alive_timeout_ms": 10000, 00:04:34.903 "arbitration_burst": 0, 00:04:34.903 "low_priority_weight": 0, 00:04:34.903 "medium_priority_weight": 0, 00:04:34.903 "high_priority_weight": 0, 00:04:34.903 "nvme_adminq_poll_period_us": 10000, 00:04:34.903 "nvme_ioq_poll_period_us": 0, 00:04:34.903 "io_queue_requests": 0, 00:04:34.903 "delay_cmd_submit": true, 00:04:34.903 "transport_retry_count": 4, 00:04:34.903 "bdev_retry_count": 3, 00:04:34.903 "transport_ack_timeout": 0, 00:04:34.903 "ctrlr_loss_timeout_sec": 0, 00:04:34.903 "reconnect_delay_sec": 0, 00:04:34.903 "fast_io_fail_timeout_sec": 0, 00:04:34.903 "disable_auto_failback": false, 00:04:34.903 "generate_uuids": false, 00:04:34.903 "transport_tos": 0, 00:04:34.903 "nvme_error_stat": false, 00:04:34.903 "rdma_srq_size": 0, 00:04:34.903 "io_path_stat": false, 00:04:34.903 "allow_accel_sequence": false, 00:04:34.903 "rdma_max_cq_size": 0, 00:04:34.903 "rdma_cm_event_timeout_ms": 0, 00:04:34.903 "dhchap_digests": [ 00:04:34.903 "sha256", 00:04:34.903 "sha384", 00:04:34.903 "sha512" 00:04:34.903 ], 00:04:34.903 "dhchap_dhgroups": [ 00:04:34.903 "null", 00:04:34.903 "ffdhe2048", 00:04:34.903 "ffdhe3072", 00:04:34.903 "ffdhe4096", 00:04:34.903 "ffdhe6144", 00:04:34.903 "ffdhe8192" 00:04:34.903 ], 00:04:34.903 "rdma_umr_per_io": false 00:04:34.903 } 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "method": "bdev_nvme_set_hotplug", 00:04:34.903 "params": { 00:04:34.903 "period_us": 100000, 00:04:34.903 "enable": false 00:04:34.903 } 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "method": "bdev_wait_for_examine" 00:04:34.903 } 00:04:34.903 
] 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "subsystem": "scsi", 00:04:34.903 "config": null 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "subsystem": "scheduler", 00:04:34.903 "config": [ 00:04:34.903 { 00:04:34.903 "method": "framework_set_scheduler", 00:04:34.903 "params": { 00:04:34.903 "name": "static" 00:04:34.903 } 00:04:34.903 } 00:04:34.903 ] 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "subsystem": "vhost_scsi", 00:04:34.903 "config": [] 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "subsystem": "vhost_blk", 00:04:34.903 "config": [] 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "subsystem": "ublk", 00:04:34.903 "config": [] 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "subsystem": "nbd", 00:04:34.903 "config": [] 00:04:34.903 }, 00:04:34.903 { 00:04:34.903 "subsystem": "nvmf", 00:04:34.903 "config": [ 00:04:34.904 { 00:04:34.904 "method": "nvmf_set_config", 00:04:34.904 "params": { 00:04:34.904 "discovery_filter": "match_any", 00:04:34.904 "admin_cmd_passthru": { 00:04:34.904 "identify_ctrlr": false 00:04:34.904 }, 00:04:34.904 "dhchap_digests": [ 00:04:34.904 "sha256", 00:04:34.904 "sha384", 00:04:34.904 "sha512" 00:04:34.904 ], 00:04:34.904 "dhchap_dhgroups": [ 00:04:34.904 "null", 00:04:34.904 "ffdhe2048", 00:04:34.904 "ffdhe3072", 00:04:34.904 "ffdhe4096", 00:04:34.904 "ffdhe6144", 00:04:34.904 "ffdhe8192" 00:04:34.904 ] 00:04:34.904 } 00:04:34.904 }, 00:04:34.904 { 00:04:34.904 "method": "nvmf_set_max_subsystems", 00:04:34.904 "params": { 00:04:34.904 "max_subsystems": 1024 00:04:34.904 } 00:04:34.904 }, 00:04:34.904 { 00:04:34.904 "method": "nvmf_set_crdt", 00:04:34.904 "params": { 00:04:34.904 "crdt1": 0, 00:04:34.904 "crdt2": 0, 00:04:34.904 "crdt3": 0 00:04:34.904 } 00:04:34.904 }, 00:04:34.904 { 00:04:34.904 "method": "nvmf_create_transport", 00:04:34.904 "params": { 00:04:34.904 "trtype": "TCP", 00:04:34.904 "max_queue_depth": 128, 00:04:34.904 "max_io_qpairs_per_ctrlr": 127, 00:04:34.904 "in_capsule_data_size": 4096, 00:04:34.904 "max_io_size": 
131072, 00:04:34.904 "io_unit_size": 131072, 00:04:34.904 "max_aq_depth": 128, 00:04:34.904 "num_shared_buffers": 511, 00:04:34.904 "buf_cache_size": 4294967295, 00:04:34.904 "dif_insert_or_strip": false, 00:04:34.904 "zcopy": false, 00:04:34.904 "c2h_success": true, 00:04:34.904 "sock_priority": 0, 00:04:34.904 "abort_timeout_sec": 1, 00:04:34.904 "ack_timeout": 0, 00:04:34.904 "data_wr_pool_size": 0 00:04:34.904 } 00:04:34.904 } 00:04:34.904 ] 00:04:34.904 }, 00:04:34.904 { 00:04:34.904 "subsystem": "iscsi", 00:04:34.904 "config": [ 00:04:34.904 { 00:04:34.904 "method": "iscsi_set_options", 00:04:34.904 "params": { 00:04:34.904 "node_base": "iqn.2016-06.io.spdk", 00:04:34.904 "max_sessions": 128, 00:04:34.904 "max_connections_per_session": 2, 00:04:34.904 "max_queue_depth": 64, 00:04:34.904 "default_time2wait": 2, 00:04:34.904 "default_time2retain": 20, 00:04:34.904 "first_burst_length": 8192, 00:04:34.904 "immediate_data": true, 00:04:34.904 "allow_duplicated_isid": false, 00:04:34.904 "error_recovery_level": 0, 00:04:34.904 "nop_timeout": 60, 00:04:34.904 "nop_in_interval": 30, 00:04:34.904 "disable_chap": false, 00:04:34.904 "require_chap": false, 00:04:34.904 "mutual_chap": false, 00:04:34.904 "chap_group": 0, 00:04:34.904 "max_large_datain_per_connection": 64, 00:04:34.904 "max_r2t_per_connection": 4, 00:04:34.904 "pdu_pool_size": 36864, 00:04:34.904 "immediate_data_pool_size": 16384, 00:04:34.904 "data_out_pool_size": 2048 00:04:34.904 } 00:04:34.904 } 00:04:34.904 ] 00:04:34.904 } 00:04:34.904 ] 00:04:34.904 } 00:04:34.904 22:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:34.904 22:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 105653 00:04:34.904 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 105653 ']' 00:04:34.904 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 105653 00:04:34.904 22:13:55 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:34.904 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.904 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105653 00:04:34.904 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.904 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.904 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105653' 00:04:34.904 killing process with pid 105653 00:04:34.904 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 105653 00:04:34.904 22:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 105653 00:04:35.164 22:13:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=105732 00:04:35.164 22:13:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:35.164 22:13:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:40.441 22:14:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 105732 00:04:40.441 22:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 105732 ']' 00:04:40.441 22:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 105732 00:04:40.441 22:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:40.441 22:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.441 22:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105732 00:04:40.441 22:14:01 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.441 22:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.441 22:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105732' 00:04:40.441 killing process with pid 105732 00:04:40.441 22:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 105732 00:04:40.441 22:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 105732 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:40.702 00:04:40.702 real 0m6.245s 00:04:40.702 user 0m5.958s 00:04:40.702 sys 0m0.585s 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.702 ************************************ 00:04:40.702 END TEST skip_rpc_with_json 00:04:40.702 ************************************ 00:04:40.702 22:14:01 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:40.702 22:14:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.702 22:14:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.702 22:14:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.702 ************************************ 00:04:40.702 START TEST skip_rpc_with_delay 00:04:40.702 ************************************ 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- 
rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.702 [2024-12-14 22:14:01.498972] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.702 00:04:40.702 real 0m0.068s 00:04:40.702 user 0m0.042s 00:04:40.702 sys 0m0.025s 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.702 22:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:40.702 ************************************ 00:04:40.702 END TEST skip_rpc_with_delay 00:04:40.702 ************************************ 00:04:40.702 22:14:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:40.702 22:14:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:40.702 22:14:01 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:40.702 22:14:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.702 22:14:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.702 22:14:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.702 ************************************ 00:04:40.702 START TEST exit_on_failed_rpc_init 00:04:40.702 ************************************ 00:04:40.702 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:40.962 22:14:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=106779 00:04:40.962 22:14:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 106779 00:04:40.962 22:14:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:40.962 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 106779 ']' 00:04:40.962 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.962 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.962 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.962 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.962 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.962 [2024-12-14 22:14:01.634204] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:04:40.962 [2024-12-14 22:14:01.634242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106779 ] 00:04:40.962 [2024-12-14 22:14:01.706505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.962 [2024-12-14 22:14:01.729215] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.223 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.223 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:41.223 22:14:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.223 22:14:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.223 
22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:41.223 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.223 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.223 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.223 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.223 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.223 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.223 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.223 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.223 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:41.223 22:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.223 [2024-12-14 22:14:01.985125] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:04:41.223 [2024-12-14 22:14:01.985172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106840 ] 00:04:41.223 [2024-12-14 22:14:02.055424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.223 [2024-12-14 22:14:02.077465] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.223 [2024-12-14 22:14:02.077518] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:41.223 [2024-12-14 22:14:02.077527] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:41.223 [2024-12-14 22:14:02.077532] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:41.483 22:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:41.483 22:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:41.483 22:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:41.483 22:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:41.483 22:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:41.483 22:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:41.483 22:14:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:41.483 22:14:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 106779 00:04:41.483 22:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 106779 ']' 00:04:41.483 22:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 106779 00:04:41.483 22:14:02 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:41.483 22:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.483 22:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106779 00:04:41.483 22:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.483 22:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.483 22:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106779' 00:04:41.483 killing process with pid 106779 00:04:41.483 22:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 106779 00:04:41.483 22:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 106779 00:04:41.743 00:04:41.743 real 0m0.861s 00:04:41.743 user 0m0.888s 00:04:41.743 sys 0m0.376s 00:04:41.743 22:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.743 22:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.743 ************************************ 00:04:41.743 END TEST exit_on_failed_rpc_init 00:04:41.743 ************************************ 00:04:41.743 22:14:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:41.743 00:04:41.743 real 0m12.988s 00:04:41.743 user 0m12.209s 00:04:41.743 sys 0m1.549s 00:04:41.743 22:14:02 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.743 22:14:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.743 ************************************ 00:04:41.743 END TEST skip_rpc 00:04:41.743 ************************************ 00:04:41.743 22:14:02 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:41.743 22:14:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.743 22:14:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.743 22:14:02 -- common/autotest_common.sh@10 -- # set +x 00:04:41.743 ************************************ 00:04:41.743 START TEST rpc_client 00:04:41.743 ************************************ 00:04:41.743 22:14:02 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:42.005 * Looking for test storage... 00:04:42.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:42.005 22:14:02 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:42.005 22:14:02 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:42.005 22:14:02 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:42.005 22:14:02 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.005 22:14:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:42.005 22:14:02 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.005 22:14:02 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.005 --rc genhtml_branch_coverage=1 00:04:42.005 --rc genhtml_function_coverage=1 00:04:42.005 --rc genhtml_legend=1 00:04:42.005 --rc geninfo_all_blocks=1 00:04:42.005 --rc geninfo_unexecuted_blocks=1 00:04:42.005 00:04:42.005 ' 00:04:42.005 22:14:02 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.005 --rc genhtml_branch_coverage=1 
00:04:42.005 --rc genhtml_function_coverage=1 00:04:42.005 --rc genhtml_legend=1 00:04:42.005 --rc geninfo_all_blocks=1 00:04:42.005 --rc geninfo_unexecuted_blocks=1 00:04:42.005 00:04:42.005 ' 00:04:42.005 22:14:02 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.005 --rc genhtml_branch_coverage=1 00:04:42.005 --rc genhtml_function_coverage=1 00:04:42.005 --rc genhtml_legend=1 00:04:42.005 --rc geninfo_all_blocks=1 00:04:42.005 --rc geninfo_unexecuted_blocks=1 00:04:42.005 00:04:42.005 ' 00:04:42.005 22:14:02 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.005 --rc genhtml_branch_coverage=1 00:04:42.005 --rc genhtml_function_coverage=1 00:04:42.005 --rc genhtml_legend=1 00:04:42.005 --rc geninfo_all_blocks=1 00:04:42.005 --rc geninfo_unexecuted_blocks=1 00:04:42.005 00:04:42.005 ' 00:04:42.005 22:14:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:42.005 OK 00:04:42.005 22:14:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:42.005 00:04:42.005 real 0m0.197s 00:04:42.005 user 0m0.120s 00:04:42.005 sys 0m0.091s 00:04:42.005 22:14:02 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.005 22:14:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:42.005 ************************************ 00:04:42.005 END TEST rpc_client 00:04:42.005 ************************************ 00:04:42.005 22:14:02 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:42.005 22:14:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.005 22:14:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.005 22:14:02 -- common/autotest_common.sh@10 
-- # set +x 00:04:42.005 ************************************ 00:04:42.005 START TEST json_config 00:04:42.005 ************************************ 00:04:42.005 22:14:02 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:42.005 22:14:02 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:42.005 22:14:02 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:42.005 22:14:02 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:42.266 22:14:02 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:42.266 22:14:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.266 22:14:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.266 22:14:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.266 22:14:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.266 22:14:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.266 22:14:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.266 22:14:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.266 22:14:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.266 22:14:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.266 22:14:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.266 22:14:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.266 22:14:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:42.266 22:14:02 json_config -- scripts/common.sh@345 -- # : 1 00:04:42.266 22:14:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.266 22:14:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.266 22:14:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:42.266 22:14:02 json_config -- scripts/common.sh@353 -- # local d=1 00:04:42.266 22:14:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.266 22:14:02 json_config -- scripts/common.sh@355 -- # echo 1 00:04:42.266 22:14:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.266 22:14:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:42.266 22:14:02 json_config -- scripts/common.sh@353 -- # local d=2 00:04:42.266 22:14:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.266 22:14:02 json_config -- scripts/common.sh@355 -- # echo 2 00:04:42.266 22:14:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.266 22:14:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.266 22:14:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.266 22:14:02 json_config -- scripts/common.sh@368 -- # return 0 00:04:42.266 22:14:02 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.266 22:14:02 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.266 --rc genhtml_branch_coverage=1 00:04:42.266 --rc genhtml_function_coverage=1 00:04:42.266 --rc genhtml_legend=1 00:04:42.266 --rc geninfo_all_blocks=1 00:04:42.266 --rc geninfo_unexecuted_blocks=1 00:04:42.266 00:04:42.266 ' 00:04:42.266 22:14:02 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.266 --rc genhtml_branch_coverage=1 00:04:42.266 --rc genhtml_function_coverage=1 00:04:42.266 --rc genhtml_legend=1 00:04:42.266 --rc geninfo_all_blocks=1 00:04:42.266 --rc geninfo_unexecuted_blocks=1 00:04:42.266 00:04:42.266 ' 00:04:42.266 22:14:02 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.266 --rc genhtml_branch_coverage=1 00:04:42.266 --rc genhtml_function_coverage=1 00:04:42.266 --rc genhtml_legend=1 00:04:42.266 --rc geninfo_all_blocks=1 00:04:42.266 --rc geninfo_unexecuted_blocks=1 00:04:42.266 00:04:42.266 ' 00:04:42.266 22:14:02 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.266 --rc genhtml_branch_coverage=1 00:04:42.266 --rc genhtml_function_coverage=1 00:04:42.266 --rc genhtml_legend=1 00:04:42.266 --rc geninfo_all_blocks=1 00:04:42.266 --rc geninfo_unexecuted_blocks=1 00:04:42.266 00:04:42.266 ' 00:04:42.266 22:14:02 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:42.266 22:14:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:42.266 22:14:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.266 22:14:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.266 22:14:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.266 22:14:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.266 22:14:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.266 22:14:02 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.266 22:14:02 json_config -- paths/export.sh@5 -- # export PATH 00:04:42.266 22:14:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@51 -- # : 0 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:42.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:42.266 22:14:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:42.266 22:14:03 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:42.266 22:14:03 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:42.266 22:14:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:42.266 22:14:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:42.266 22:14:03 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:42.266 22:14:03 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:42.266 22:14:03 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:42.266 22:14:03 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:42.266 22:14:03 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:42.266 22:14:03 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:42.267 22:14:03 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:42.267 22:14:03 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:42.267 22:14:03 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:42.267 22:14:03 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:42.267 22:14:03 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:42.267 22:14:03 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:42.267 INFO: JSON configuration test init 00:04:42.267 22:14:03 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:42.267 22:14:03 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:42.267 22:14:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:42.267 22:14:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.267 22:14:03 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:42.267 22:14:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:42.267 22:14:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.267 22:14:03 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:42.267 22:14:03 json_config -- json_config/common.sh@9 -- # local app=target 00:04:42.267 22:14:03 json_config -- json_config/common.sh@10 -- # shift 00:04:42.267 22:14:03 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.267 22:14:03 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.267 22:14:03 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.267 22:14:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.267 22:14:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.267 22:14:03 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=107186 00:04:42.267 22:14:03 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.267 Waiting for target to run... 
00:04:42.267 22:14:03 json_config -- json_config/common.sh@25 -- # waitforlisten 107186 /var/tmp/spdk_tgt.sock 00:04:42.267 22:14:03 json_config -- common/autotest_common.sh@835 -- # '[' -z 107186 ']' 00:04:42.267 22:14:03 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:42.267 22:14:03 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.267 22:14:03 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.267 22:14:03 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.267 22:14:03 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.267 22:14:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.267 [2024-12-14 22:14:03.069018] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
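The `scripts/common.sh` trace above (`lt 1.15 2`, `cmp_versions`, `decimal`) walks a field-by-field comparison of two dotted version strings. As a rough illustration of that logic, here is a minimal standalone sketch; the function name and details are illustrative, not the exact SPDK helper:

```shell
# Hedged sketch of the cmp_versions "<" path traced above: split both
# versions on '.', compare numerically field by field (missing fields
# count as 0), succeed only when the first is strictly lower.
version_lt() {
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0   # earliest differing field decides
        (( x > y )) && return 1
    done
    return 1                       # equal versions are not "less than"
}
```

Note the comparison is numeric, matching the `decimal` validation in the trace, so `1.2` sorts below `1.10` (unlike a plain string compare).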
00:04:42.267 [2024-12-14 22:14:03.069062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107186 ] 00:04:42.526 [2024-12-14 22:14:03.351325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.526 [2024-12-14 22:14:03.363886] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.096 22:14:03 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.096 22:14:03 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:43.096 22:14:03 json_config -- json_config/common.sh@26 -- # echo '' 00:04:43.096 00:04:43.096 22:14:03 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:43.096 22:14:03 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:43.096 22:14:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.096 22:14:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.096 22:14:03 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:43.096 22:14:03 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:43.096 22:14:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:43.097 22:14:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.097 22:14:03 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:43.097 22:14:03 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:43.097 22:14:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:46.400 22:14:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.400 22:14:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:46.400 22:14:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@54 -- # sort 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:46.400 22:14:07 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:46.400 22:14:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.400 22:14:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:46.400 22:14:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.400 22:14:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:46.400 22:14:07 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:46.400 22:14:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:46.660 MallocForNvmf0 00:04:46.660 22:14:07 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:46.660 22:14:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:46.920 MallocForNvmf1 00:04:46.920 22:14:07 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:46.920 22:14:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:47.179 [2024-12-14 22:14:07.813721] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.179 22:14:07 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:47.179 22:14:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:47.179 22:14:08 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:47.179 22:14:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:47.439 22:14:08 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:47.439 22:14:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:47.698 22:14:08 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:47.698 22:14:08 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:47.698 [2024-12-14 22:14:08.547921] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:47.698 22:14:08 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:47.698 22:14:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:47.698 22:14:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.957 22:14:08 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:47.957 22:14:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:47.957 22:14:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.957 22:14:08 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:47.957 22:14:08 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:47.957 22:14:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:47.957 MallocBdevForConfigChangeCheck 00:04:47.957 22:14:08 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:47.957 22:14:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:47.957 22:14:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.217 22:14:08 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:48.217 22:14:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.475 22:14:09 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:48.475 INFO: shutting down applications... 00:04:48.475 22:14:09 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:48.476 22:14:09 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:48.476 22:14:09 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:48.476 22:14:09 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:50.381 Calling clear_iscsi_subsystem 00:04:50.381 Calling clear_nvmf_subsystem 00:04:50.381 Calling clear_nbd_subsystem 00:04:50.381 Calling clear_ublk_subsystem 00:04:50.381 Calling clear_vhost_blk_subsystem 00:04:50.381 Calling clear_vhost_scsi_subsystem 00:04:50.381 Calling clear_bdev_subsystem 00:04:50.381 22:14:10 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:50.381 22:14:10 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:50.381 22:14:10 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:50.381 22:14:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.382 22:14:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:50.382 22:14:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:50.382 22:14:11 json_config -- json_config/json_config.sh@352 -- # break 00:04:50.382 22:14:11 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:50.382 22:14:11 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:50.382 22:14:11 json_config -- json_config/common.sh@31 -- # local app=target 00:04:50.382 22:14:11 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:50.382 22:14:11 json_config -- json_config/common.sh@35 -- # [[ -n 107186 ]] 00:04:50.382 22:14:11 json_config -- json_config/common.sh@38 -- # kill -SIGINT 107186 00:04:50.382 22:14:11 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:50.382 22:14:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.382 22:14:11 json_config -- json_config/common.sh@41 -- # kill -0 107186 00:04:50.382 22:14:11 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.951 22:14:11 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.951 22:14:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.951 22:14:11 json_config -- json_config/common.sh@41 -- # kill -0 107186 00:04:50.951 22:14:11 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:50.951 22:14:11 json_config -- json_config/common.sh@43 -- # break 00:04:50.951 22:14:11 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:50.951 22:14:11 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:50.951 SPDK target shutdown done 00:04:50.951 22:14:11 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:50.951 INFO: relaunching applications... 
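[Editor's note: the shutdown sequence traced above — `kill -SIGINT`, then a retry loop probing with `kill -0` and `sleep 0.5` until the target exits — can be sketched as below. This is a minimal reimplementation for illustration, not the actual `json_config/common.sh` code; `wait_for_exit` is a hypothetical helper name, and the 30-iteration budget mirrors the `(( i < 30 ))` seen in the trace.]

```shell
#!/usr/bin/env bash
# Sketch of the graceful-shutdown pattern in the log above (assumption:
# hypothetical helper name; the real loop lives in json_config/common.sh).
wait_for_exit() {
    local pid=$1 retries=${2:-30}
    # Ask the target to shut down cleanly, as the log does with SIGINT.
    kill -SIGINT "$pid" 2>/dev/null
    local i
    for (( i = 0; i < retries; i++ )); do
        # kill -0 sends no signal; it only checks whether the PID still exists.
        kill -0 "$pid" 2>/dev/null || return 0
        sleep 0.5
    done
    return 1   # still alive after the retry budget
}
```

The `kill -0` probe is why the loop needs no `wait`: it works even when the target is not a child of the polling shell, which matches how the test harness supervises `spdk_tgt`.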
00:04:50.951 22:14:11 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.951 22:14:11 json_config -- json_config/common.sh@9 -- # local app=target 00:04:50.951 22:14:11 json_config -- json_config/common.sh@10 -- # shift 00:04:50.951 22:14:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:50.951 22:14:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:50.951 22:14:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:50.951 22:14:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.951 22:14:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.951 22:14:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=108661 00:04:50.951 22:14:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:50.951 Waiting for target to run... 00:04:50.951 22:14:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.951 22:14:11 json_config -- json_config/common.sh@25 -- # waitforlisten 108661 /var/tmp/spdk_tgt.sock 00:04:50.951 22:14:11 json_config -- common/autotest_common.sh@835 -- # '[' -z 108661 ']' 00:04:50.951 22:14:11 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.951 22:14:11 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.951 22:14:11 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:50.951 22:14:11 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.951 22:14:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.951 [2024-12-14 22:14:11.692669] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:04:50.951 [2024-12-14 22:14:11.692737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108661 ] 00:04:51.520 [2024-12-14 22:14:12.158388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.520 [2024-12-14 22:14:12.178680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.811 [2024-12-14 22:14:15.183651] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:54.811 [2024-12-14 22:14:15.215915] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:55.070 22:14:15 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.070 22:14:15 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:55.070 22:14:15 json_config -- json_config/common.sh@26 -- # echo '' 00:04:55.070 00:04:55.070 22:14:15 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:55.070 22:14:15 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:55.070 INFO: Checking if target configuration is the same... 
00:04:55.070 22:14:15 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.070 22:14:15 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:55.070 22:14:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:55.070 + '[' 2 -ne 2 ']' 00:04:55.070 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:55.070 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:55.070 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:55.070 +++ basename /dev/fd/62 00:04:55.070 ++ mktemp /tmp/62.XXX 00:04:55.070 + tmp_file_1=/tmp/62.26B 00:04:55.070 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.070 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:55.070 + tmp_file_2=/tmp/spdk_tgt_config.json.PJl 00:04:55.070 + ret=0 00:04:55.070 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:55.637 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:55.637 + diff -u /tmp/62.26B /tmp/spdk_tgt_config.json.PJl 00:04:55.637 + echo 'INFO: JSON config files are the same' 00:04:55.637 INFO: JSON config files are the same 00:04:55.637 + rm /tmp/62.26B /tmp/spdk_tgt_config.json.PJl 00:04:55.637 + exit 0 00:04:55.637 22:14:16 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:55.637 22:14:16 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:55.637 INFO: changing configuration and checking if this can be detected... 
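[Editor's note: the "JSON config files are the same" check above dumps the live config with `save_config`, normalizes both sides through `config_filter.py -method sort`, and byte-compares with `diff -u`. A minimal sketch of the same idea follows, with one stated substitution: `python3 -m json`-style key sorting stands in for `config_filter.py`, which is an assumption — the real filter also normalizes subsystem ordering. `same_config` is a hypothetical helper name.]

```shell
#!/usr/bin/env bash
# Sketch of the sorted-diff config comparison from the log above.
# Assumption: key-sorted JSON re-serialization approximates
# config_filter.py -method sort.
normalize() {
    python3 -c 'import json, sys; print(json.dumps(json.load(open(sys.argv[1])), sort_keys=True))' "$1"
}

same_config() {
    # Returns 0 when the two configs are semantically identical, like the
    # log's "exit 0" path; nonzero mirrors its "configuration change detected" path.
    diff -u <(normalize "$1") <(normalize "$2") >/dev/null
}
```

Sorting before diffing is the load-bearing step: `save_config` gives no ordering guarantee, so a raw `diff` of two dumps of the same config could report spurious changes.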
00:04:55.637 22:14:16 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:55.637 22:14:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:55.896 22:14:16 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.896 22:14:16 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:55.896 22:14:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:55.896 + '[' 2 -ne 2 ']' 00:04:55.896 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:55.896 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:55.896 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:55.896 +++ basename /dev/fd/62 00:04:55.896 ++ mktemp /tmp/62.XXX 00:04:55.896 + tmp_file_1=/tmp/62.Yno 00:04:55.896 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.896 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:55.896 + tmp_file_2=/tmp/spdk_tgt_config.json.9fb 00:04:55.896 + ret=0 00:04:55.896 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:56.154 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:56.154 + diff -u /tmp/62.Yno /tmp/spdk_tgt_config.json.9fb 00:04:56.154 + ret=1 00:04:56.154 + echo '=== Start of file: /tmp/62.Yno ===' 00:04:56.154 + cat /tmp/62.Yno 00:04:56.154 + echo '=== End of file: /tmp/62.Yno ===' 00:04:56.154 + echo '' 00:04:56.154 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9fb ===' 00:04:56.154 + cat /tmp/spdk_tgt_config.json.9fb 00:04:56.154 + echo '=== End of file: /tmp/spdk_tgt_config.json.9fb ===' 00:04:56.154 + echo '' 00:04:56.154 + rm /tmp/62.Yno /tmp/spdk_tgt_config.json.9fb 00:04:56.154 + exit 1 00:04:56.154 22:14:16 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:56.154 INFO: configuration change detected. 
00:04:56.154 22:14:16 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:56.154 22:14:16 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:56.154 22:14:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.154 22:14:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.154 22:14:16 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:56.154 22:14:16 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:56.154 22:14:16 json_config -- json_config/json_config.sh@324 -- # [[ -n 108661 ]] 00:04:56.154 22:14:16 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:56.154 22:14:16 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:56.155 22:14:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.155 22:14:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.155 22:14:16 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:56.155 22:14:16 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:56.155 22:14:16 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:56.155 22:14:16 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:56.155 22:14:16 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:56.155 22:14:16 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:56.155 22:14:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:56.155 22:14:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.155 22:14:16 json_config -- json_config/json_config.sh@330 -- # killprocess 108661 00:04:56.155 22:14:16 json_config -- common/autotest_common.sh@954 -- # '[' -z 108661 ']' 00:04:56.155 22:14:16 json_config -- common/autotest_common.sh@958 -- # kill -0 108661 
00:04:56.155 22:14:16 json_config -- common/autotest_common.sh@959 -- # uname 00:04:56.155 22:14:17 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.155 22:14:17 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108661 00:04:56.413 22:14:17 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.413 22:14:17 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.413 22:14:17 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108661' 00:04:56.413 killing process with pid 108661 00:04:56.413 22:14:17 json_config -- common/autotest_common.sh@973 -- # kill 108661 00:04:56.413 22:14:17 json_config -- common/autotest_common.sh@978 -- # wait 108661 00:04:57.790 22:14:18 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:57.790 22:14:18 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:57.790 22:14:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:57.790 22:14:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.790 22:14:18 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:57.790 22:14:18 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:57.790 INFO: Success 00:04:57.790 00:04:57.790 real 0m15.774s 00:04:57.790 user 0m16.976s 00:04:57.790 sys 0m1.928s 00:04:57.790 22:14:18 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.790 22:14:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.790 ************************************ 00:04:57.790 END TEST json_config 00:04:57.790 ************************************ 00:04:57.790 22:14:18 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:57.790 22:14:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.790 22:14:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.790 22:14:18 -- common/autotest_common.sh@10 -- # set +x 00:04:57.790 ************************************ 00:04:57.790 START TEST json_config_extra_key 00:04:57.790 ************************************ 00:04:57.790 22:14:18 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:58.051 22:14:18 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:58.051 22:14:18 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:58.051 22:14:18 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.051 22:14:18 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:58.051 22:14:18 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.051 22:14:18 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.051 --rc genhtml_branch_coverage=1 00:04:58.051 --rc genhtml_function_coverage=1 00:04:58.051 --rc genhtml_legend=1 00:04:58.051 --rc geninfo_all_blocks=1 
00:04:58.051 --rc geninfo_unexecuted_blocks=1 00:04:58.051 00:04:58.051 ' 00:04:58.051 22:14:18 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.051 --rc genhtml_branch_coverage=1 00:04:58.051 --rc genhtml_function_coverage=1 00:04:58.051 --rc genhtml_legend=1 00:04:58.051 --rc geninfo_all_blocks=1 00:04:58.051 --rc geninfo_unexecuted_blocks=1 00:04:58.051 00:04:58.051 ' 00:04:58.051 22:14:18 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.051 --rc genhtml_branch_coverage=1 00:04:58.051 --rc genhtml_function_coverage=1 00:04:58.051 --rc genhtml_legend=1 00:04:58.051 --rc geninfo_all_blocks=1 00:04:58.051 --rc geninfo_unexecuted_blocks=1 00:04:58.051 00:04:58.051 ' 00:04:58.051 22:14:18 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.051 --rc genhtml_branch_coverage=1 00:04:58.051 --rc genhtml_function_coverage=1 00:04:58.051 --rc genhtml_legend=1 00:04:58.051 --rc geninfo_all_blocks=1 00:04:58.051 --rc geninfo_unexecuted_blocks=1 00:04:58.051 00:04:58.051 ' 00:04:58.051 22:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:58.051 22:14:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:58.051 22:14:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.051 22:14:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.051 22:14:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.051 22:14:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.051 22:14:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:58.051 22:14:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.051 22:14:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.051 22:14:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.051 22:14:18 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.051 22:14:18 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.051 22:14:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:58.051 22:14:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:58.051 22:14:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.051 22:14:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.051 22:14:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.051 22:14:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.051 22:14:18 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.051 22:14:18 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.051 22:14:18 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.051 22:14:18 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.052 22:14:18 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.052 22:14:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:58.052 22:14:18 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.052 22:14:18 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:58.052 22:14:18 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.052 22:14:18 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.052 22:14:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.052 22:14:18 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.052 22:14:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.052 22:14:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.052 22:14:18 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.052 22:14:18 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.052 22:14:18 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.052 22:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:58.052 22:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:58.052 22:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:58.052 22:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:58.052 22:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:58.052 22:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:58.052 22:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:58.052 22:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:58.052 22:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:58.052 22:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:58.052 22:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:58.052 INFO: launching applications... 00:04:58.052 22:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:58.052 22:14:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:58.052 22:14:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:58.052 22:14:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:58.052 22:14:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:58.052 22:14:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:58.052 22:14:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.052 22:14:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.052 22:14:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=109920 00:04:58.052 22:14:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:58.052 Waiting for target to run... 
00:04:58.052 22:14:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 109920 /var/tmp/spdk_tgt.sock 00:04:58.052 22:14:18 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 109920 ']' 00:04:58.052 22:14:18 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:58.052 22:14:18 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:58.052 22:14:18 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.052 22:14:18 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:58.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:58.052 22:14:18 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.052 22:14:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:58.052 [2024-12-14 22:14:18.898953] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:04:58.052 [2024-12-14 22:14:18.899004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109920 ] 00:04:58.312 [2024-12-14 22:14:19.182638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.312 [2024-12-14 22:14:19.195863] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.880 22:14:19 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.880 22:14:19 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:58.880 22:14:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:58.880 00:04:58.880 22:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:58.880 INFO: shutting down applications... 00:04:58.880 22:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:58.880 22:14:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:58.880 22:14:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:58.880 22:14:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 109920 ]] 00:04:58.880 22:14:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 109920 00:04:58.880 22:14:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:58.880 22:14:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.880 22:14:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 109920 00:04:58.880 22:14:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:59.448 22:14:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:59.448 22:14:20 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.448 22:14:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 109920 00:04:59.448 22:14:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:59.448 22:14:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:59.448 22:14:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:59.448 22:14:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:59.448 SPDK target shutdown done 00:04:59.448 22:14:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:59.448 Success 00:04:59.448 00:04:59.448 real 0m1.576s 00:04:59.448 user 0m1.344s 00:04:59.448 sys 0m0.406s 00:04:59.448 22:14:20 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.448 22:14:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:59.448 ************************************ 00:04:59.448 END TEST json_config_extra_key 00:04:59.448 ************************************ 00:04:59.448 22:14:20 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:59.448 22:14:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.448 22:14:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.448 22:14:20 -- common/autotest_common.sh@10 -- # set +x 00:04:59.448 ************************************ 00:04:59.448 START TEST alias_rpc 00:04:59.448 ************************************ 00:04:59.448 22:14:20 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:59.707 * Looking for test storage... 
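[Editor's note: the `lt 1.15 2` / `cmp_versions` trace that recurs in these suites (splitting each version on `IFS=.-:` into an array, then comparing field by field with missing fields as 0) can be sketched as below. This is a minimal reimplementation under stated assumptions — `version_lt` is a hypothetical name, and it splits only on `.`, whereas the real `scripts/common.sh` helper also splits on `-` and `:` and supports other operators.]

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version comparison traced in the log above.
version_lt() {
    local -a v1 v2
    # Split each version string into numeric fields, as the trace does
    # with "read -ra ver1".
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing fields default to 0, so "1.15" compares like "1.15.0".
        local a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}
```

Comparing field by field numerically is what makes `1.15 < 2` come out true here, where a plain string comparison would get it wrong (`"1.15" > "2"` lexically is false, but `"1.9" < "1.15"` lexically would be wrong).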
00:04:59.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:59.707 22:14:20 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:59.707 22:14:20 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:59.707 22:14:20 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:59.707 22:14:20 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:59.707 22:14:20 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.708 22:14:20 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:59.708 22:14:20 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.708 22:14:20 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.708 22:14:20 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.708 22:14:20 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:59.708 22:14:20 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.708 22:14:20 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:59.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.708 --rc genhtml_branch_coverage=1 00:04:59.708 --rc genhtml_function_coverage=1 00:04:59.708 --rc genhtml_legend=1 00:04:59.708 --rc geninfo_all_blocks=1 00:04:59.708 --rc geninfo_unexecuted_blocks=1 00:04:59.708 00:04:59.708 ' 00:04:59.708 22:14:20 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:59.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.708 --rc genhtml_branch_coverage=1 00:04:59.708 --rc genhtml_function_coverage=1 00:04:59.708 --rc genhtml_legend=1 00:04:59.708 --rc geninfo_all_blocks=1 00:04:59.708 --rc geninfo_unexecuted_blocks=1 00:04:59.708 00:04:59.708 ' 00:04:59.708 22:14:20 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:04:59.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.708 --rc genhtml_branch_coverage=1 00:04:59.708 --rc genhtml_function_coverage=1 00:04:59.708 --rc genhtml_legend=1 00:04:59.708 --rc geninfo_all_blocks=1 00:04:59.708 --rc geninfo_unexecuted_blocks=1 00:04:59.708 00:04:59.708 ' 00:04:59.708 22:14:20 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:59.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.708 --rc genhtml_branch_coverage=1 00:04:59.708 --rc genhtml_function_coverage=1 00:04:59.708 --rc genhtml_legend=1 00:04:59.708 --rc geninfo_all_blocks=1 00:04:59.708 --rc geninfo_unexecuted_blocks=1 00:04:59.708 00:04:59.708 ' 00:04:59.708 22:14:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:59.708 22:14:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=110374 00:04:59.708 22:14:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 110374 00:04:59.708 22:14:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.708 22:14:20 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 110374 ']' 00:04:59.708 22:14:20 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.708 22:14:20 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.708 22:14:20 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.708 22:14:20 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.708 22:14:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.708 [2024-12-14 22:14:20.542848] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:04:59.708 [2024-12-14 22:14:20.542897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110374 ] 00:04:59.967 [2024-12-14 22:14:20.617274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.967 [2024-12-14 22:14:20.639179] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.967 22:14:20 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.967 22:14:20 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:59.967 22:14:20 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:00.226 22:14:21 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 110374 00:05:00.226 22:14:21 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 110374 ']' 00:05:00.226 22:14:21 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 110374 00:05:00.226 22:14:21 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:00.226 22:14:21 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.226 22:14:21 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110374 00:05:00.485 22:14:21 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.485 22:14:21 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.485 22:14:21 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110374' 00:05:00.485 killing process with pid 110374 00:05:00.485 22:14:21 alias_rpc -- common/autotest_common.sh@973 -- # kill 110374 00:05:00.485 22:14:21 alias_rpc -- common/autotest_common.sh@978 -- # wait 110374 00:05:00.744 00:05:00.744 real 0m1.091s 00:05:00.744 user 0m1.107s 00:05:00.744 sys 0m0.419s 00:05:00.744 22:14:21 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.744 22:14:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.744 ************************************ 00:05:00.744 END TEST alias_rpc 00:05:00.744 ************************************ 00:05:00.744 22:14:21 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:00.744 22:14:21 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:00.744 22:14:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.744 22:14:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.744 22:14:21 -- common/autotest_common.sh@10 -- # set +x 00:05:00.744 ************************************ 00:05:00.744 START TEST spdkcli_tcp 00:05:00.744 ************************************ 00:05:00.744 22:14:21 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:00.744 * Looking for test storage... 
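The `cmp_versions` traces that `scripts/common.sh` emits throughout this log (the `IFS=.-:` / `read -ra ver1` steps deciding `lt 1.15 2` for the installed lcov) amount to a field-by-field numeric compare with zero-padding of the shorter version. A rough re-implementation under that assumption, with a hypothetical function name:

```shell
#!/usr/bin/env bash
# Field-by-field numeric version compare, mirroring the cmp_versions
# trace in this log: split on dots, treat missing fields as 0.
# Hypothetical sketch for illustration; not the exact SPDK helper.
version_lt() {
  local IFS=.
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0   # strictly smaller in this field: less-than
    (( x > y )) && return 1   # strictly larger: not less-than
  done
  return 1                    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Numeric comparison is the point: a lexicographic compare would wrongly claim `1.9 < 1.10` is false, while field-wise arithmetic gets it right.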
00:05:00.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:00.744 22:14:21 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:00.744 22:14:21 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:00.744 22:14:21 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:00.744 22:14:21 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:00.744 22:14:21 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.745 22:14:21 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.745 22:14:21 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.745 22:14:21 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.745 22:14:21 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.745 22:14:21 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.745 22:14:21 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.745 22:14:21 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.745 22:14:21 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.745 22:14:21 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.745 22:14:21 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.745 22:14:21 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:00.745 22:14:21 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:00.745 22:14:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.745 22:14:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.004 22:14:21 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:01.004 22:14:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:01.004 22:14:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.004 22:14:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:01.004 22:14:21 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.004 22:14:21 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:01.004 22:14:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:01.004 22:14:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.004 22:14:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:01.004 22:14:21 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.004 22:14:21 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.004 22:14:21 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.004 22:14:21 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:01.004 22:14:21 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.004 22:14:21 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:01.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.004 --rc genhtml_branch_coverage=1 00:05:01.004 --rc genhtml_function_coverage=1 00:05:01.004 --rc genhtml_legend=1 00:05:01.004 --rc geninfo_all_blocks=1 00:05:01.004 --rc geninfo_unexecuted_blocks=1 00:05:01.004 00:05:01.004 ' 00:05:01.004 22:14:21 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:01.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.004 --rc genhtml_branch_coverage=1 00:05:01.004 --rc genhtml_function_coverage=1 00:05:01.004 --rc genhtml_legend=1 00:05:01.004 --rc geninfo_all_blocks=1 00:05:01.004 --rc geninfo_unexecuted_blocks=1 00:05:01.004 00:05:01.004 ' 00:05:01.004 22:14:21 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:01.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.004 --rc genhtml_branch_coverage=1 00:05:01.004 --rc genhtml_function_coverage=1 00:05:01.004 --rc genhtml_legend=1 00:05:01.004 --rc geninfo_all_blocks=1 00:05:01.004 --rc geninfo_unexecuted_blocks=1 00:05:01.004 00:05:01.004 ' 00:05:01.004 22:14:21 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:01.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.004 --rc genhtml_branch_coverage=1 00:05:01.004 --rc genhtml_function_coverage=1 00:05:01.004 --rc genhtml_legend=1 00:05:01.004 --rc geninfo_all_blocks=1 00:05:01.004 --rc geninfo_unexecuted_blocks=1 00:05:01.004 00:05:01.004 ' 00:05:01.004 22:14:21 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:01.004 22:14:21 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:01.004 22:14:21 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:01.004 22:14:21 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:01.004 22:14:21 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:01.004 22:14:21 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:01.004 22:14:21 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:01.004 22:14:21 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:01.004 22:14:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.004 22:14:21 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=110534 00:05:01.004 22:14:21 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 110534 00:05:01.004 22:14:21 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:01.004 22:14:21 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 110534 ']' 00:05:01.004 22:14:21 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.004 22:14:21 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.004 22:14:21 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.004 22:14:21 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.004 22:14:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.004 [2024-12-14 22:14:21.701342] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:01.004 [2024-12-14 22:14:21.701396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110534 ] 00:05:01.004 [2024-12-14 22:14:21.774730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.004 [2024-12-14 22:14:21.798106] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.004 [2024-12-14 22:14:21.798107] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.269 22:14:22 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.269 22:14:22 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:01.269 22:14:22 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=110713 00:05:01.269 22:14:22 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:01.269 22:14:22 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:01.532 [ 00:05:01.532 "bdev_malloc_delete", 00:05:01.532 "bdev_malloc_create", 00:05:01.532 "bdev_null_resize", 00:05:01.532 "bdev_null_delete", 00:05:01.532 "bdev_null_create", 00:05:01.532 "bdev_nvme_cuse_unregister", 00:05:01.532 "bdev_nvme_cuse_register", 00:05:01.532 "bdev_opal_new_user", 00:05:01.532 "bdev_opal_set_lock_state", 00:05:01.532 "bdev_opal_delete", 00:05:01.532 "bdev_opal_get_info", 00:05:01.532 "bdev_opal_create", 00:05:01.532 "bdev_nvme_opal_revert", 00:05:01.532 "bdev_nvme_opal_init", 00:05:01.532 "bdev_nvme_send_cmd", 00:05:01.532 "bdev_nvme_set_keys", 00:05:01.532 "bdev_nvme_get_path_iostat", 00:05:01.532 "bdev_nvme_get_mdns_discovery_info", 00:05:01.532 "bdev_nvme_stop_mdns_discovery", 00:05:01.532 "bdev_nvme_start_mdns_discovery", 00:05:01.532 "bdev_nvme_set_multipath_policy", 00:05:01.532 "bdev_nvme_set_preferred_path", 00:05:01.532 "bdev_nvme_get_io_paths", 00:05:01.532 "bdev_nvme_remove_error_injection", 00:05:01.532 "bdev_nvme_add_error_injection", 00:05:01.532 "bdev_nvme_get_discovery_info", 00:05:01.532 "bdev_nvme_stop_discovery", 00:05:01.532 "bdev_nvme_start_discovery", 00:05:01.532 "bdev_nvme_get_controller_health_info", 00:05:01.532 "bdev_nvme_disable_controller", 00:05:01.532 "bdev_nvme_enable_controller", 00:05:01.532 "bdev_nvme_reset_controller", 00:05:01.532 "bdev_nvme_get_transport_statistics", 00:05:01.532 "bdev_nvme_apply_firmware", 00:05:01.532 "bdev_nvme_detach_controller", 00:05:01.532 "bdev_nvme_get_controllers", 00:05:01.532 "bdev_nvme_attach_controller", 00:05:01.532 "bdev_nvme_set_hotplug", 00:05:01.532 "bdev_nvme_set_options", 00:05:01.532 "bdev_passthru_delete", 00:05:01.532 "bdev_passthru_create", 00:05:01.532 "bdev_lvol_set_parent_bdev", 00:05:01.532 "bdev_lvol_set_parent", 00:05:01.532 "bdev_lvol_check_shallow_copy", 00:05:01.532 "bdev_lvol_start_shallow_copy", 00:05:01.532 "bdev_lvol_grow_lvstore", 00:05:01.532 
"bdev_lvol_get_lvols", 00:05:01.532 "bdev_lvol_get_lvstores", 00:05:01.532 "bdev_lvol_delete", 00:05:01.532 "bdev_lvol_set_read_only", 00:05:01.532 "bdev_lvol_resize", 00:05:01.532 "bdev_lvol_decouple_parent", 00:05:01.532 "bdev_lvol_inflate", 00:05:01.532 "bdev_lvol_rename", 00:05:01.532 "bdev_lvol_clone_bdev", 00:05:01.532 "bdev_lvol_clone", 00:05:01.532 "bdev_lvol_snapshot", 00:05:01.532 "bdev_lvol_create", 00:05:01.532 "bdev_lvol_delete_lvstore", 00:05:01.532 "bdev_lvol_rename_lvstore", 00:05:01.532 "bdev_lvol_create_lvstore", 00:05:01.532 "bdev_raid_set_options", 00:05:01.532 "bdev_raid_remove_base_bdev", 00:05:01.532 "bdev_raid_add_base_bdev", 00:05:01.532 "bdev_raid_delete", 00:05:01.532 "bdev_raid_create", 00:05:01.532 "bdev_raid_get_bdevs", 00:05:01.532 "bdev_error_inject_error", 00:05:01.532 "bdev_error_delete", 00:05:01.532 "bdev_error_create", 00:05:01.532 "bdev_split_delete", 00:05:01.532 "bdev_split_create", 00:05:01.532 "bdev_delay_delete", 00:05:01.532 "bdev_delay_create", 00:05:01.532 "bdev_delay_update_latency", 00:05:01.532 "bdev_zone_block_delete", 00:05:01.532 "bdev_zone_block_create", 00:05:01.532 "blobfs_create", 00:05:01.532 "blobfs_detect", 00:05:01.532 "blobfs_set_cache_size", 00:05:01.532 "bdev_aio_delete", 00:05:01.532 "bdev_aio_rescan", 00:05:01.532 "bdev_aio_create", 00:05:01.532 "bdev_ftl_set_property", 00:05:01.532 "bdev_ftl_get_properties", 00:05:01.532 "bdev_ftl_get_stats", 00:05:01.532 "bdev_ftl_unmap", 00:05:01.532 "bdev_ftl_unload", 00:05:01.532 "bdev_ftl_delete", 00:05:01.532 "bdev_ftl_load", 00:05:01.532 "bdev_ftl_create", 00:05:01.532 "bdev_virtio_attach_controller", 00:05:01.532 "bdev_virtio_scsi_get_devices", 00:05:01.532 "bdev_virtio_detach_controller", 00:05:01.532 "bdev_virtio_blk_set_hotplug", 00:05:01.532 "bdev_iscsi_delete", 00:05:01.532 "bdev_iscsi_create", 00:05:01.532 "bdev_iscsi_set_options", 00:05:01.532 "accel_error_inject_error", 00:05:01.532 "ioat_scan_accel_module", 00:05:01.532 "dsa_scan_accel_module", 
00:05:01.532 "iaa_scan_accel_module", 00:05:01.532 "vfu_virtio_create_fs_endpoint", 00:05:01.532 "vfu_virtio_create_scsi_endpoint", 00:05:01.532 "vfu_virtio_scsi_remove_target", 00:05:01.532 "vfu_virtio_scsi_add_target", 00:05:01.532 "vfu_virtio_create_blk_endpoint", 00:05:01.532 "vfu_virtio_delete_endpoint", 00:05:01.532 "keyring_file_remove_key", 00:05:01.532 "keyring_file_add_key", 00:05:01.532 "keyring_linux_set_options", 00:05:01.532 "fsdev_aio_delete", 00:05:01.532 "fsdev_aio_create", 00:05:01.532 "iscsi_get_histogram", 00:05:01.532 "iscsi_enable_histogram", 00:05:01.532 "iscsi_set_options", 00:05:01.532 "iscsi_get_auth_groups", 00:05:01.532 "iscsi_auth_group_remove_secret", 00:05:01.532 "iscsi_auth_group_add_secret", 00:05:01.532 "iscsi_delete_auth_group", 00:05:01.532 "iscsi_create_auth_group", 00:05:01.532 "iscsi_set_discovery_auth", 00:05:01.532 "iscsi_get_options", 00:05:01.532 "iscsi_target_node_request_logout", 00:05:01.532 "iscsi_target_node_set_redirect", 00:05:01.532 "iscsi_target_node_set_auth", 00:05:01.532 "iscsi_target_node_add_lun", 00:05:01.532 "iscsi_get_stats", 00:05:01.532 "iscsi_get_connections", 00:05:01.532 "iscsi_portal_group_set_auth", 00:05:01.532 "iscsi_start_portal_group", 00:05:01.532 "iscsi_delete_portal_group", 00:05:01.532 "iscsi_create_portal_group", 00:05:01.532 "iscsi_get_portal_groups", 00:05:01.532 "iscsi_delete_target_node", 00:05:01.532 "iscsi_target_node_remove_pg_ig_maps", 00:05:01.532 "iscsi_target_node_add_pg_ig_maps", 00:05:01.532 "iscsi_create_target_node", 00:05:01.532 "iscsi_get_target_nodes", 00:05:01.532 "iscsi_delete_initiator_group", 00:05:01.532 "iscsi_initiator_group_remove_initiators", 00:05:01.532 "iscsi_initiator_group_add_initiators", 00:05:01.532 "iscsi_create_initiator_group", 00:05:01.532 "iscsi_get_initiator_groups", 00:05:01.532 "nvmf_set_crdt", 00:05:01.532 "nvmf_set_config", 00:05:01.532 "nvmf_set_max_subsystems", 00:05:01.532 "nvmf_stop_mdns_prr", 00:05:01.532 "nvmf_publish_mdns_prr", 
00:05:01.532 "nvmf_subsystem_get_listeners", 00:05:01.532 "nvmf_subsystem_get_qpairs", 00:05:01.532 "nvmf_subsystem_get_controllers", 00:05:01.532 "nvmf_get_stats", 00:05:01.532 "nvmf_get_transports", 00:05:01.532 "nvmf_create_transport", 00:05:01.532 "nvmf_get_targets", 00:05:01.532 "nvmf_delete_target", 00:05:01.532 "nvmf_create_target", 00:05:01.532 "nvmf_subsystem_allow_any_host", 00:05:01.532 "nvmf_subsystem_set_keys", 00:05:01.532 "nvmf_subsystem_remove_host", 00:05:01.532 "nvmf_subsystem_add_host", 00:05:01.532 "nvmf_ns_remove_host", 00:05:01.532 "nvmf_ns_add_host", 00:05:01.532 "nvmf_subsystem_remove_ns", 00:05:01.532 "nvmf_subsystem_set_ns_ana_group", 00:05:01.532 "nvmf_subsystem_add_ns", 00:05:01.533 "nvmf_subsystem_listener_set_ana_state", 00:05:01.533 "nvmf_discovery_get_referrals", 00:05:01.533 "nvmf_discovery_remove_referral", 00:05:01.533 "nvmf_discovery_add_referral", 00:05:01.533 "nvmf_subsystem_remove_listener", 00:05:01.533 "nvmf_subsystem_add_listener", 00:05:01.533 "nvmf_delete_subsystem", 00:05:01.533 "nvmf_create_subsystem", 00:05:01.533 "nvmf_get_subsystems", 00:05:01.533 "env_dpdk_get_mem_stats", 00:05:01.533 "nbd_get_disks", 00:05:01.533 "nbd_stop_disk", 00:05:01.533 "nbd_start_disk", 00:05:01.533 "ublk_recover_disk", 00:05:01.533 "ublk_get_disks", 00:05:01.533 "ublk_stop_disk", 00:05:01.533 "ublk_start_disk", 00:05:01.533 "ublk_destroy_target", 00:05:01.533 "ublk_create_target", 00:05:01.533 "virtio_blk_create_transport", 00:05:01.533 "virtio_blk_get_transports", 00:05:01.533 "vhost_controller_set_coalescing", 00:05:01.533 "vhost_get_controllers", 00:05:01.533 "vhost_delete_controller", 00:05:01.533 "vhost_create_blk_controller", 00:05:01.533 "vhost_scsi_controller_remove_target", 00:05:01.533 "vhost_scsi_controller_add_target", 00:05:01.533 "vhost_start_scsi_controller", 00:05:01.533 "vhost_create_scsi_controller", 00:05:01.533 "thread_set_cpumask", 00:05:01.533 "scheduler_set_options", 00:05:01.533 "framework_get_governor", 00:05:01.533 
"framework_get_scheduler", 00:05:01.533 "framework_set_scheduler", 00:05:01.533 "framework_get_reactors", 00:05:01.533 "thread_get_io_channels", 00:05:01.533 "thread_get_pollers", 00:05:01.533 "thread_get_stats", 00:05:01.533 "framework_monitor_context_switch", 00:05:01.533 "spdk_kill_instance", 00:05:01.533 "log_enable_timestamps", 00:05:01.533 "log_get_flags", 00:05:01.533 "log_clear_flag", 00:05:01.533 "log_set_flag", 00:05:01.533 "log_get_level", 00:05:01.533 "log_set_level", 00:05:01.533 "log_get_print_level", 00:05:01.533 "log_set_print_level", 00:05:01.533 "framework_enable_cpumask_locks", 00:05:01.533 "framework_disable_cpumask_locks", 00:05:01.533 "framework_wait_init", 00:05:01.533 "framework_start_init", 00:05:01.533 "scsi_get_devices", 00:05:01.533 "bdev_get_histogram", 00:05:01.533 "bdev_enable_histogram", 00:05:01.533 "bdev_set_qos_limit", 00:05:01.533 "bdev_set_qd_sampling_period", 00:05:01.533 "bdev_get_bdevs", 00:05:01.533 "bdev_reset_iostat", 00:05:01.533 "bdev_get_iostat", 00:05:01.533 "bdev_examine", 00:05:01.533 "bdev_wait_for_examine", 00:05:01.533 "bdev_set_options", 00:05:01.533 "accel_get_stats", 00:05:01.533 "accel_set_options", 00:05:01.533 "accel_set_driver", 00:05:01.533 "accel_crypto_key_destroy", 00:05:01.533 "accel_crypto_keys_get", 00:05:01.533 "accel_crypto_key_create", 00:05:01.533 "accel_assign_opc", 00:05:01.533 "accel_get_module_info", 00:05:01.533 "accel_get_opc_assignments", 00:05:01.533 "vmd_rescan", 00:05:01.533 "vmd_remove_device", 00:05:01.533 "vmd_enable", 00:05:01.533 "sock_get_default_impl", 00:05:01.533 "sock_set_default_impl", 00:05:01.533 "sock_impl_set_options", 00:05:01.533 "sock_impl_get_options", 00:05:01.533 "iobuf_get_stats", 00:05:01.533 "iobuf_set_options", 00:05:01.533 "keyring_get_keys", 00:05:01.533 "vfu_tgt_set_base_path", 00:05:01.533 "framework_get_pci_devices", 00:05:01.533 "framework_get_config", 00:05:01.533 "framework_get_subsystems", 00:05:01.533 "fsdev_set_opts", 00:05:01.533 "fsdev_get_opts", 
00:05:01.533 "trace_get_info", 00:05:01.533 "trace_get_tpoint_group_mask", 00:05:01.533 "trace_disable_tpoint_group", 00:05:01.533 "trace_enable_tpoint_group", 00:05:01.533 "trace_clear_tpoint_mask", 00:05:01.533 "trace_set_tpoint_mask", 00:05:01.533 "notify_get_notifications", 00:05:01.533 "notify_get_types", 00:05:01.533 "spdk_get_version", 00:05:01.533 "rpc_get_methods" 00:05:01.533 ] 00:05:01.533 22:14:22 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:01.533 22:14:22 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.533 22:14:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.533 22:14:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:01.533 22:14:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 110534 00:05:01.533 22:14:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 110534 ']' 00:05:01.533 22:14:22 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 110534 00:05:01.533 22:14:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:01.533 22:14:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.533 22:14:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110534 00:05:01.533 22:14:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.533 22:14:22 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.533 22:14:22 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110534' 00:05:01.533 killing process with pid 110534 00:05:01.533 22:14:22 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 110534 00:05:01.533 22:14:22 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 110534 00:05:01.792 00:05:01.792 real 0m1.097s 00:05:01.792 user 0m1.872s 00:05:01.792 sys 0m0.429s 00:05:01.792 22:14:22 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.792 22:14:22 spdkcli_tcp -- 
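The `killprocess` traces in this log follow a fixed safety sequence before killing the target: confirm the PID is alive with `kill -0`, resolve its command name with `ps --no-headers -o comm=`, refuse to kill anything named `sudo`, then `kill` and `wait`. A simplified sketch of that sequence (hypothetical reconstruction, not the exact `common/autotest_common.sh` code; the `ps` flags are the Linux procps form the log itself uses):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern traced above: sanity-check the PID,
# make sure it is not a sudo wrapper, then kill it and reap it.
killprocess() {
  local pid=$1 name
  kill -0 "$pid" 2>/dev/null || return 1       # nothing to kill
  name=$(ps --no-headers -o comm= "$pid")      # process name only
  [ "$name" = sudo ] && return 1               # never kill the sudo parent
  kill "$pid"
  wait "$pid" 2>/dev/null                      # reap; ignore non-child errors
  echo "killed process with pid $pid"
}
```

The `wait` step is what lets the test harness proceed knowing the reactor has fully exited rather than merely received SIGTERM.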
common/autotest_common.sh@10 -- # set +x 00:05:01.792 ************************************ 00:05:01.792 END TEST spdkcli_tcp 00:05:01.792 ************************************ 00:05:01.792 22:14:22 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:01.792 22:14:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.792 22:14:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.792 22:14:22 -- common/autotest_common.sh@10 -- # set +x 00:05:01.792 ************************************ 00:05:01.792 START TEST dpdk_mem_utility 00:05:01.792 ************************************ 00:05:01.792 22:14:22 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:02.051 * Looking for test storage... 00:05:02.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:02.051 22:14:22 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.051 22:14:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.051 22:14:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.051 22:14:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.051 22:14:22 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:02.051 22:14:22 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.051 22:14:22 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:05:02.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.051 --rc genhtml_branch_coverage=1 00:05:02.051 --rc genhtml_function_coverage=1 00:05:02.051 --rc genhtml_legend=1 00:05:02.051 --rc geninfo_all_blocks=1 00:05:02.051 --rc geninfo_unexecuted_blocks=1 00:05:02.051 00:05:02.051 ' 00:05:02.051 22:14:22 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.051 --rc genhtml_branch_coverage=1 00:05:02.051 --rc genhtml_function_coverage=1 00:05:02.051 --rc genhtml_legend=1 00:05:02.051 --rc geninfo_all_blocks=1 00:05:02.051 --rc geninfo_unexecuted_blocks=1 00:05:02.051 00:05:02.051 ' 00:05:02.051 22:14:22 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.051 --rc genhtml_branch_coverage=1 00:05:02.051 --rc genhtml_function_coverage=1 00:05:02.051 --rc genhtml_legend=1 00:05:02.051 --rc geninfo_all_blocks=1 00:05:02.051 --rc geninfo_unexecuted_blocks=1 00:05:02.051 00:05:02.051 ' 00:05:02.051 22:14:22 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.051 --rc genhtml_branch_coverage=1 00:05:02.051 --rc genhtml_function_coverage=1 00:05:02.051 --rc genhtml_legend=1 00:05:02.051 --rc geninfo_all_blocks=1 00:05:02.051 --rc geninfo_unexecuted_blocks=1 00:05:02.051 00:05:02.051 ' 00:05:02.051 22:14:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:02.052 22:14:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=110796 00:05:02.052 22:14:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 110796 00:05:02.052 22:14:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.052 22:14:22 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 110796 ']' 00:05:02.052 22:14:22 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.052 22:14:22 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.052 22:14:22 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.052 22:14:22 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.052 22:14:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.052 [2024-12-14 22:14:22.864484] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:02.052 [2024-12-14 22:14:22.864532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110796 ] 00:05:02.311 [2024-12-14 22:14:22.936724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.311 [2024-12-14 22:14:22.959063] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.311 22:14:23 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.311 22:14:23 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:02.311 22:14:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:02.311 22:14:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:02.311 22:14:23 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.311 
22:14:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.311 { 00:05:02.311 "filename": "/tmp/spdk_mem_dump.txt" 00:05:02.311 } 00:05:02.311 22:14:23 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.311 22:14:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:02.570 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:02.570 1 heaps totaling size 818.000000 MiB 00:05:02.570 size: 818.000000 MiB heap id: 0 00:05:02.570 end heaps---------- 00:05:02.570 9 mempools totaling size 603.782043 MiB 00:05:02.570 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:02.570 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:02.570 size: 100.555481 MiB name: bdev_io_110796 00:05:02.570 size: 50.003479 MiB name: msgpool_110796 00:05:02.570 size: 36.509338 MiB name: fsdev_io_110796 00:05:02.570 size: 21.763794 MiB name: PDU_Pool 00:05:02.570 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:02.570 size: 4.133484 MiB name: evtpool_110796 00:05:02.571 size: 0.026123 MiB name: Session_Pool 00:05:02.571 end mempools------- 00:05:02.571 6 memzones totaling size 4.142822 MiB 00:05:02.571 size: 1.000366 MiB name: RG_ring_0_110796 00:05:02.571 size: 1.000366 MiB name: RG_ring_1_110796 00:05:02.571 size: 1.000366 MiB name: RG_ring_4_110796 00:05:02.571 size: 1.000366 MiB name: RG_ring_5_110796 00:05:02.571 size: 0.125366 MiB name: RG_ring_2_110796 00:05:02.571 size: 0.015991 MiB name: RG_ring_3_110796 00:05:02.571 end memzones------- 00:05:02.571 22:14:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:02.571 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:02.571 list of free elements. 
size: 10.852478 MiB 00:05:02.571 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:02.571 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:02.571 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:02.571 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:02.571 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:02.571 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:02.571 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:02.571 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:02.571 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:02.571 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:02.571 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:02.571 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:02.571 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:02.571 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:02.571 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:02.571 list of standard malloc elements. 
size: 199.218628 MiB 00:05:02.571 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:02.571 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:02.571 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:02.571 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:02.571 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:02.571 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:02.571 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:02.571 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:02.571 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:02.571 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:02.571 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:02.571 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:02.571 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:02.571 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:02.571 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:02.571 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:02.571 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:02.571 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:02.571 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:02.571 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:02.571 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:02.571 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:02.571 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:02.571 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:02.571 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:02.571 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:02.571 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:02.571 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:02.571 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:02.571 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:02.571 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:02.571 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:02.571 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:02.571 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:02.571 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:02.571 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:02.571 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:02.571 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:02.571 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:02.571 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:02.571 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:02.571 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:02.571 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:02.571 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:02.571 list of memzone associated elements. 
size: 607.928894 MiB 00:05:02.571 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:02.571 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:02.571 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:02.571 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:02.571 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:02.571 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_110796_0 00:05:02.571 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:02.571 associated memzone info: size: 48.002930 MiB name: MP_msgpool_110796_0 00:05:02.571 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:02.571 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_110796_0 00:05:02.571 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:02.571 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:02.571 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:02.571 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:02.571 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:02.571 associated memzone info: size: 3.000122 MiB name: MP_evtpool_110796_0 00:05:02.571 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:02.571 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_110796 00:05:02.571 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:02.571 associated memzone info: size: 1.007996 MiB name: MP_evtpool_110796 00:05:02.571 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:02.571 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:02.571 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:02.571 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:02.571 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:02.571 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:02.571 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:02.571 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:02.571 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:02.571 associated memzone info: size: 1.000366 MiB name: RG_ring_0_110796 00:05:02.571 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:02.571 associated memzone info: size: 1.000366 MiB name: RG_ring_1_110796 00:05:02.571 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:02.571 associated memzone info: size: 1.000366 MiB name: RG_ring_4_110796 00:05:02.571 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:02.571 associated memzone info: size: 1.000366 MiB name: RG_ring_5_110796 00:05:02.571 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:02.571 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_110796 00:05:02.571 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:02.571 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_110796 00:05:02.571 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:02.571 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:02.571 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:02.571 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:02.571 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:02.571 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:02.571 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:02.571 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_110796 00:05:02.571 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:02.571 associated memzone info: size: 0.125366 MiB name: RG_ring_2_110796 00:05:02.571 element at address: 0x2000064f5b80 with size: 0.031738 MiB 
00:05:02.571 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:02.571 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:02.571 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:02.571 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:02.571 associated memzone info: size: 0.015991 MiB name: RG_ring_3_110796 00:05:02.571 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:02.571 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:02.571 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:02.571 associated memzone info: size: 0.000183 MiB name: MP_msgpool_110796 00:05:02.571 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:02.571 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_110796 00:05:02.571 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:02.571 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_110796 00:05:02.571 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:02.571 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:02.571 22:14:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:02.571 22:14:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 110796 00:05:02.571 22:14:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 110796 ']' 00:05:02.571 22:14:23 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 110796 00:05:02.571 22:14:23 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:02.571 22:14:23 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.571 22:14:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110796 00:05:02.571 22:14:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.571 22:14:23 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.571 22:14:23 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110796' 00:05:02.571 killing process with pid 110796 00:05:02.571 22:14:23 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 110796 00:05:02.572 22:14:23 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 110796 00:05:02.832 00:05:02.832 real 0m0.981s 00:05:02.832 user 0m0.935s 00:05:02.832 sys 0m0.397s 00:05:02.832 22:14:23 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.832 22:14:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.832 ************************************ 00:05:02.832 END TEST dpdk_mem_utility 00:05:02.832 ************************************ 00:05:02.832 22:14:23 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:02.832 22:14:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.832 22:14:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.832 22:14:23 -- common/autotest_common.sh@10 -- # set +x 00:05:02.832 ************************************ 00:05:02.832 START TEST event 00:05:02.832 ************************************ 00:05:02.832 22:14:23 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:03.092 * Looking for test storage... 
00:05:03.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:03.092 22:14:23 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.092 22:14:23 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.092 22:14:23 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.092 22:14:23 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.092 22:14:23 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.092 22:14:23 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.092 22:14:23 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.092 22:14:23 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.092 22:14:23 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.092 22:14:23 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.092 22:14:23 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.092 22:14:23 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.092 22:14:23 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.092 22:14:23 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.092 22:14:23 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.092 22:14:23 event -- scripts/common.sh@344 -- # case "$op" in 00:05:03.092 22:14:23 event -- scripts/common.sh@345 -- # : 1 00:05:03.092 22:14:23 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.092 22:14:23 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.092 22:14:23 event -- scripts/common.sh@365 -- # decimal 1 00:05:03.092 22:14:23 event -- scripts/common.sh@353 -- # local d=1 00:05:03.092 22:14:23 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.092 22:14:23 event -- scripts/common.sh@355 -- # echo 1 00:05:03.092 22:14:23 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.092 22:14:23 event -- scripts/common.sh@366 -- # decimal 2 00:05:03.092 22:14:23 event -- scripts/common.sh@353 -- # local d=2 00:05:03.092 22:14:23 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.092 22:14:23 event -- scripts/common.sh@355 -- # echo 2 00:05:03.092 22:14:23 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.092 22:14:23 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.092 22:14:23 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.092 22:14:23 event -- scripts/common.sh@368 -- # return 0 00:05:03.092 22:14:23 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.092 22:14:23 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.092 --rc genhtml_branch_coverage=1 00:05:03.092 --rc genhtml_function_coverage=1 00:05:03.092 --rc genhtml_legend=1 00:05:03.092 --rc geninfo_all_blocks=1 00:05:03.092 --rc geninfo_unexecuted_blocks=1 00:05:03.092 00:05:03.092 ' 00:05:03.092 22:14:23 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.092 --rc genhtml_branch_coverage=1 00:05:03.092 --rc genhtml_function_coverage=1 00:05:03.092 --rc genhtml_legend=1 00:05:03.092 --rc geninfo_all_blocks=1 00:05:03.092 --rc geninfo_unexecuted_blocks=1 00:05:03.092 00:05:03.092 ' 00:05:03.092 22:14:23 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.092 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:03.092 --rc genhtml_branch_coverage=1 00:05:03.092 --rc genhtml_function_coverage=1 00:05:03.092 --rc genhtml_legend=1 00:05:03.092 --rc geninfo_all_blocks=1 00:05:03.092 --rc geninfo_unexecuted_blocks=1 00:05:03.092 00:05:03.092 ' 00:05:03.092 22:14:23 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.092 --rc genhtml_branch_coverage=1 00:05:03.092 --rc genhtml_function_coverage=1 00:05:03.092 --rc genhtml_legend=1 00:05:03.092 --rc geninfo_all_blocks=1 00:05:03.092 --rc geninfo_unexecuted_blocks=1 00:05:03.092 00:05:03.092 ' 00:05:03.092 22:14:23 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:03.092 22:14:23 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:03.092 22:14:23 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:03.092 22:14:23 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:03.092 22:14:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.092 22:14:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.092 ************************************ 00:05:03.092 START TEST event_perf 00:05:03.092 ************************************ 00:05:03.092 22:14:23 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:03.092 Running I/O for 1 seconds...[2024-12-14 22:14:23.914815] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:03.092 [2024-12-14 22:14:23.914890] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111080 ] 00:05:03.351 [2024-12-14 22:14:23.995308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:03.351 [2024-12-14 22:14:24.020946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.351 [2024-12-14 22:14:24.020999] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.351 [2024-12-14 22:14:24.021105] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.351 [2024-12-14 22:14:24.021106] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.290 Running I/O for 1 seconds... 00:05:04.290 lcore 0: 202436 00:05:04.290 lcore 1: 202434 00:05:04.290 lcore 2: 202434 00:05:04.290 lcore 3: 202434 00:05:04.290 done. 
00:05:04.290 00:05:04.290 real 0m1.161s 00:05:04.290 user 0m4.067s 00:05:04.290 sys 0m0.090s 00:05:04.290 22:14:25 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.290 22:14:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:04.290 ************************************ 00:05:04.290 END TEST event_perf 00:05:04.290 ************************************ 00:05:04.290 22:14:25 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:04.290 22:14:25 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:04.290 22:14:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.290 22:14:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.290 ************************************ 00:05:04.290 START TEST event_reactor 00:05:04.290 ************************************ 00:05:04.290 22:14:25 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:04.290 [2024-12-14 22:14:25.148957] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:04.290 [2024-12-14 22:14:25.149031] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111326 ] 00:05:04.549 [2024-12-14 22:14:25.227518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.549 [2024-12-14 22:14:25.250590] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.486 test_start 00:05:05.486 oneshot 00:05:05.486 tick 100 00:05:05.486 tick 100 00:05:05.486 tick 250 00:05:05.486 tick 100 00:05:05.486 tick 100 00:05:05.486 tick 100 00:05:05.486 tick 250 00:05:05.486 tick 500 00:05:05.486 tick 100 00:05:05.486 tick 100 00:05:05.486 tick 250 00:05:05.486 tick 100 00:05:05.486 tick 100 00:05:05.486 test_end 00:05:05.486 00:05:05.486 real 0m1.155s 00:05:05.486 user 0m1.071s 00:05:05.486 sys 0m0.079s 00:05:05.486 22:14:26 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.486 22:14:26 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:05.486 ************************************ 00:05:05.486 END TEST event_reactor 00:05:05.486 ************************************ 00:05:05.486 22:14:26 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:05.486 22:14:26 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:05.486 22:14:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.486 22:14:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.486 ************************************ 00:05:05.486 START TEST event_reactor_perf 00:05:05.486 ************************************ 00:05:05.486 22:14:26 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:05:05.746 [2024-12-14 22:14:26.373175] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:05.746 [2024-12-14 22:14:26.373255] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111572 ] 00:05:05.746 [2024-12-14 22:14:26.450029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.746 [2024-12-14 22:14:26.473461] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.684 test_start 00:05:06.684 test_end 00:05:06.684 Performance: 520073 events per second 00:05:06.684 00:05:06.684 real 0m1.155s 00:05:06.684 user 0m1.075s 00:05:06.684 sys 0m0.075s 00:05:06.684 22:14:27 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.684 22:14:27 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.684 ************************************ 00:05:06.684 END TEST event_reactor_perf 00:05:06.684 ************************************ 00:05:06.684 22:14:27 event -- event/event.sh@49 -- # uname -s 00:05:06.684 22:14:27 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:06.684 22:14:27 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:06.684 22:14:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.684 22:14:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.684 22:14:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.944 ************************************ 00:05:06.944 START TEST event_scheduler 00:05:06.944 ************************************ 00:05:06.944 22:14:27 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:06.944 * Looking for test storage... 00:05:06.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:06.944 22:14:27 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:06.944 22:14:27 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:06.944 22:14:27 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:06.944 22:14:27 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.944 22:14:27 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:06.944 22:14:27 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.944 22:14:27 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:06.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.944 --rc genhtml_branch_coverage=1 00:05:06.944 --rc genhtml_function_coverage=1 00:05:06.944 --rc genhtml_legend=1 00:05:06.944 --rc geninfo_all_blocks=1 00:05:06.944 --rc geninfo_unexecuted_blocks=1 00:05:06.944 00:05:06.944 ' 00:05:06.944 22:14:27 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:06.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.944 --rc genhtml_branch_coverage=1 00:05:06.944 --rc genhtml_function_coverage=1 00:05:06.944 --rc 
genhtml_legend=1 00:05:06.944 --rc geninfo_all_blocks=1 00:05:06.944 --rc geninfo_unexecuted_blocks=1 00:05:06.944 00:05:06.944 ' 00:05:06.944 22:14:27 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:06.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.944 --rc genhtml_branch_coverage=1 00:05:06.944 --rc genhtml_function_coverage=1 00:05:06.944 --rc genhtml_legend=1 00:05:06.944 --rc geninfo_all_blocks=1 00:05:06.944 --rc geninfo_unexecuted_blocks=1 00:05:06.944 00:05:06.944 ' 00:05:06.944 22:14:27 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:06.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.944 --rc genhtml_branch_coverage=1 00:05:06.944 --rc genhtml_function_coverage=1 00:05:06.944 --rc genhtml_legend=1 00:05:06.944 --rc geninfo_all_blocks=1 00:05:06.944 --rc geninfo_unexecuted_blocks=1 00:05:06.944 00:05:06.944 ' 00:05:06.944 22:14:27 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:06.944 22:14:27 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=111854 00:05:06.944 22:14:27 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.944 22:14:27 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:06.944 22:14:27 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 111854 00:05:06.944 22:14:27 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 111854 ']' 00:05:06.944 22:14:27 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.944 22:14:27 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.944 22:14:27 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.944 22:14:27 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.944 22:14:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.944 [2024-12-14 22:14:27.805129] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:06.944 [2024-12-14 22:14:27.805174] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111854 ] 00:05:07.204 [2024-12-14 22:14:27.875328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:07.204 [2024-12-14 22:14:27.900874] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.204 [2024-12-14 22:14:27.901020] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.204 [2024-12-14 22:14:27.900985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.204 [2024-12-14 22:14:27.901021] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:07.204 22:14:27 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.204 22:14:27 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:07.204 22:14:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:07.204 22:14:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.204 22:14:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.204 [2024-12-14 22:14:27.965746] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:07.204 [2024-12-14 22:14:27.965764] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:07.204 [2024-12-14 22:14:27.965776] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:07.204 [2024-12-14 22:14:27.965783] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:07.204 [2024-12-14 22:14:27.965791] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:07.204 22:14:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.204 22:14:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:07.204 22:14:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.204 22:14:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.204 [2024-12-14 22:14:28.035505] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:07.204 22:14:28 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.204 22:14:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:07.204 22:14:28 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.204 22:14:28 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.204 22:14:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.204 ************************************ 00:05:07.204 START TEST scheduler_create_thread 00:05:07.204 ************************************ 00:05:07.204 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:07.204 22:14:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:07.204 22:14:28 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.204 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.464 2 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.464 3 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.464 4 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.464 5 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.464 22:14:28 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.464 6 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.464 7 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.464 8 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.464 22:14:28 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.464 9 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.464 10 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.464 22:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.403 22:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.403 22:14:29 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:08.403 22:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.403 22:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.781 22:14:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.781 22:14:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:09.781 22:14:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:09.781 22:14:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.781 22:14:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.718 22:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.718 00:05:10.718 real 0m3.382s 00:05:10.718 user 0m0.027s 00:05:10.718 sys 0m0.004s 00:05:10.718 22:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.718 22:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.718 ************************************ 00:05:10.718 END TEST scheduler_create_thread 00:05:10.718 ************************************ 00:05:10.718 22:14:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:10.718 22:14:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 111854 00:05:10.718 22:14:31 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 111854 ']' 00:05:10.718 22:14:31 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 111854 00:05:10.718 22:14:31 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:10.718 22:14:31 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.718 22:14:31 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111854 00:05:10.718 22:14:31 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:10.718 22:14:31 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:10.718 22:14:31 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111854' 00:05:10.718 killing process with pid 111854 00:05:10.718 22:14:31 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 111854 00:05:10.718 22:14:31 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 111854 00:05:10.977 [2024-12-14 22:14:31.835578] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:11.236 00:05:11.236 real 0m4.452s 00:05:11.236 user 0m7.864s 00:05:11.236 sys 0m0.373s 00:05:11.236 22:14:32 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.236 22:14:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.236 ************************************ 00:05:11.237 END TEST event_scheduler 00:05:11.237 ************************************ 00:05:11.237 22:14:32 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:11.237 22:14:32 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:11.237 22:14:32 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.237 22:14:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.237 22:14:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.237 ************************************ 00:05:11.237 START TEST app_repeat 00:05:11.237 ************************************ 00:05:11.237 22:14:32 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:11.237 22:14:32 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.237 22:14:32 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.237 22:14:32 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:11.237 22:14:32 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.237 22:14:32 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:11.237 22:14:32 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:11.237 22:14:32 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:11.496 22:14:32 event.app_repeat -- event/event.sh@19 -- # repeat_pid=112577 00:05:11.496 22:14:32 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.496 22:14:32 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:11.496 22:14:32 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 112577' 00:05:11.496 Process app_repeat pid: 112577 00:05:11.496 22:14:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:11.497 22:14:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:11.497 spdk_app_start Round 0 00:05:11.497 22:14:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112577 /var/tmp/spdk-nbd.sock 00:05:11.497 22:14:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112577 ']' 00:05:11.497 22:14:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.497 22:14:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.497 22:14:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:11.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.497 22:14:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.497 22:14:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.497 [2024-12-14 22:14:32.147471] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:11.497 [2024-12-14 22:14:32.147529] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112577 ] 00:05:11.497 [2024-12-14 22:14:32.226479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.497 [2024-12-14 22:14:32.250175] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.497 [2024-12-14 22:14:32.250183] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.497 22:14:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.497 22:14:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:11.497 22:14:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.756 Malloc0 00:05:11.756 22:14:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.015 Malloc1 00:05:12.015 22:14:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.015 22:14:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.015 22:14:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.015 22:14:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.015 22:14:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.015 22:14:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.015 22:14:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.015 
22:14:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.015 22:14:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.015 22:14:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.015 22:14:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.015 22:14:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.015 22:14:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.015 22:14:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.015 22:14:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.015 22:14:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.274 /dev/nbd0 00:05:12.274 22:14:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.274 22:14:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.274 22:14:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:12.274 22:14:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.274 22:14:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.274 22:14:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.274 22:14:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:12.274 22:14:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.274 22:14:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.274 22:14:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.274 22:14:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:12.274 1+0 records in 00:05:12.274 1+0 records out 00:05:12.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000152131 s, 26.9 MB/s 00:05:12.275 22:14:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.275 22:14:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.275 22:14:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.275 22:14:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.275 22:14:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.275 22:14:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.275 22:14:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.275 22:14:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.533 /dev/nbd1 00:05:12.534 22:14:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.534 22:14:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.534 22:14:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:12.534 22:14:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.534 22:14:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.534 22:14:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.534 22:14:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:12.534 22:14:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.534 22:14:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.534 22:14:33 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.534 22:14:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.534 1+0 records in 00:05:12.534 1+0 records out 00:05:12.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000171421 s, 23.9 MB/s 00:05:12.534 22:14:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.534 22:14:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.534 22:14:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.534 22:14:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.534 22:14:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.534 22:14:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.534 22:14:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.534 22:14:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.534 22:14:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.534 22:14:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.793 { 00:05:12.793 "nbd_device": "/dev/nbd0", 00:05:12.793 "bdev_name": "Malloc0" 00:05:12.793 }, 00:05:12.793 { 00:05:12.793 "nbd_device": "/dev/nbd1", 00:05:12.793 "bdev_name": "Malloc1" 00:05:12.793 } 00:05:12.793 ]' 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.793 { 00:05:12.793 "nbd_device": "/dev/nbd0", 00:05:12.793 "bdev_name": "Malloc0" 00:05:12.793 
}, 00:05:12.793 { 00:05:12.793 "nbd_device": "/dev/nbd1", 00:05:12.793 "bdev_name": "Malloc1" 00:05:12.793 } 00:05:12.793 ]' 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.793 /dev/nbd1' 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.793 /dev/nbd1' 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.793 256+0 records in 00:05:12.793 256+0 records out 00:05:12.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105798 s, 99.1 MB/s 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.793 256+0 records in 00:05:12.793 256+0 records out 00:05:12.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141082 s, 74.3 MB/s 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.793 256+0 records in 00:05:12.793 256+0 records out 00:05:12.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155415 s, 67.5 MB/s 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.793 22:14:33 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.793 22:14:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.794 22:14:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.794 22:14:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.053 22:14:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.053 22:14:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.053 22:14:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.053 22:14:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.053 22:14:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.053 22:14:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.053 22:14:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.053 22:14:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.053 22:14:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.053 22:14:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.312 22:14:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.312 22:14:34 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.312 22:14:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.312 22:14:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.312 22:14:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.312 22:14:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.312 22:14:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.312 22:14:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.312 22:14:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.312 22:14:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.312 22:14:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.571 22:14:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.571 22:14:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.571 22:14:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.571 22:14:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.571 22:14:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.571 22:14:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.571 22:14:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.571 22:14:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.571 22:14:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.571 22:14:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.571 22:14:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.571 22:14:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.571 22:14:34 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.830 22:14:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.830 [2024-12-14 22:14:34.642689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.830 [2024-12-14 22:14:34.662426] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.830 [2024-12-14 22:14:34.662426] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.830 [2024-12-14 22:14:34.702759] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.830 [2024-12-14 22:14:34.702798] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.123 22:14:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.123 22:14:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:17.123 spdk_app_start Round 1 00:05:17.123 22:14:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112577 /var/tmp/spdk-nbd.sock 00:05:17.123 22:14:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112577 ']' 00:05:17.123 22:14:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.123 22:14:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.123 22:14:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:17.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:17.123 22:14:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.123 22:14:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.123 22:14:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.123 22:14:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:17.123 22:14:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.123 Malloc0 00:05:17.123 22:14:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.382 Malloc1 00:05:17.382 22:14:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.382 22:14:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.382 22:14:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.382 22:14:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.382 22:14:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.382 22:14:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.382 22:14:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.382 22:14:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.382 22:14:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.382 22:14:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.382 22:14:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.382 22:14:38 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:17.382 22:14:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.382 22:14:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.382 22:14:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.382 22:14:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.641 /dev/nbd0 00:05:17.641 22:14:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.641 22:14:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.641 22:14:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:17.641 22:14:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:17.641 22:14:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:17.641 22:14:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:17.641 22:14:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:17.641 22:14:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:17.641 22:14:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:17.641 22:14:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:17.641 22:14:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.641 1+0 records in 00:05:17.641 1+0 records out 00:05:17.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240595 s, 17.0 MB/s 00:05:17.641 22:14:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.641 22:14:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:17.641 22:14:38 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.641 22:14:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:17.641 22:14:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:17.641 22:14:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.641 22:14:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.641 22:14:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.900 /dev/nbd1 00:05:17.900 22:14:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.900 22:14:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.900 22:14:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:17.900 22:14:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:17.900 22:14:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:17.900 22:14:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:17.900 22:14:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:17.900 22:14:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:17.900 22:14:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:17.900 22:14:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:17.901 22:14:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.901 1+0 records in 00:05:17.901 1+0 records out 00:05:17.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187491 s, 21.8 MB/s 00:05:17.901 22:14:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.901 22:14:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:17.901 22:14:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.901 22:14:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:17.901 22:14:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:17.901 22:14:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.901 22:14:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.901 22:14:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.901 22:14:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.901 22:14:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.159 22:14:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.159 { 00:05:18.159 "nbd_device": "/dev/nbd0", 00:05:18.159 "bdev_name": "Malloc0" 00:05:18.159 }, 00:05:18.159 { 00:05:18.159 "nbd_device": "/dev/nbd1", 00:05:18.159 "bdev_name": "Malloc1" 00:05:18.159 } 00:05:18.159 ]' 00:05:18.159 22:14:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.159 22:14:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.159 { 00:05:18.159 "nbd_device": "/dev/nbd0", 00:05:18.159 "bdev_name": "Malloc0" 00:05:18.159 }, 00:05:18.159 { 00:05:18.159 "nbd_device": "/dev/nbd1", 00:05:18.159 "bdev_name": "Malloc1" 00:05:18.159 } 00:05:18.159 ]' 00:05:18.159 22:14:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.159 /dev/nbd1' 00:05:18.159 22:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.159 /dev/nbd1' 00:05:18.159 
22:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.159 22:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.159 22:14:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.159 22:14:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.159 22:14:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.159 22:14:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.159 22:14:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.159 22:14:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.159 22:14:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.159 22:14:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.159 22:14:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.159 22:14:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.159 256+0 records in 00:05:18.159 256+0 records out 00:05:18.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106846 s, 98.1 MB/s 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.160 256+0 records in 00:05:18.160 256+0 records out 00:05:18.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138068 s, 75.9 MB/s 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.160 256+0 records in 00:05:18.160 256+0 records out 00:05:18.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155906 s, 67.3 MB/s 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.160 22:14:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.418 22:14:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.418 22:14:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.418 22:14:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.418 22:14:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.418 22:14:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.418 22:14:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.418 22:14:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.418 22:14:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.418 22:14:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.418 22:14:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.676 22:14:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.676 22:14:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.676 22:14:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.676 22:14:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.676 22:14:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.676 22:14:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.676 22:14:39 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:18.676 22:14:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.676 22:14:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.676 22:14:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.676 22:14:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.936 22:14:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.936 22:14:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.936 22:14:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.936 22:14:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.936 22:14:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.936 22:14:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.936 22:14:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.936 22:14:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.936 22:14:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.936 22:14:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.936 22:14:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.936 22:14:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.936 22:14:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.195 22:14:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.195 [2024-12-14 22:14:39.980605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.195 [2024-12-14 22:14:40.000666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.195 [2024-12-14 22:14:40.000666] 
reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.195 [2024-12-14 22:14:40.042717] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.195 [2024-12-14 22:14:40.042754] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.481 22:14:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.481 22:14:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:22.481 spdk_app_start Round 2 00:05:22.481 22:14:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112577 /var/tmp/spdk-nbd.sock 00:05:22.481 22:14:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112577 ']' 00:05:22.481 22:14:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.481 22:14:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.481 22:14:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:22.481 22:14:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.481 22:14:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.481 22:14:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.481 22:14:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:22.481 22:14:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.481 Malloc0 00:05:22.481 22:14:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.739 Malloc1 00:05:22.739 22:14:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.739 22:14:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.739 22:14:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.739 22:14:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.739 22:14:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.739 22:14:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.739 22:14:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.739 22:14:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.739 22:14:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.739 22:14:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.739 22:14:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.739 22:14:43 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:22.739 22:14:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:22.739 22:14:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.739 22:14:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.739 22:14:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.997 /dev/nbd0 00:05:22.997 22:14:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.997 22:14:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.997 22:14:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:22.997 22:14:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:22.997 22:14:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.997 22:14:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.998 22:14:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:22.998 22:14:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:22.998 22:14:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:22.998 22:14:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.998 22:14:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.998 1+0 records in 00:05:22.998 1+0 records out 00:05:22.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00842806 s, 486 kB/s 00:05:22.998 22:14:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.998 22:14:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:22.998 22:14:43 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.998 22:14:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.998 22:14:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:22.998 22:14:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.998 22:14:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.998 22:14:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.256 /dev/nbd1 00:05:23.256 22:14:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.256 22:14:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.256 22:14:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:23.256 22:14:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.256 22:14:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.256 22:14:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.256 22:14:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:23.256 22:14:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:23.256 22:14:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.256 22:14:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.256 22:14:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.256 1+0 records in 00:05:23.257 1+0 records out 00:05:23.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002205 s, 18.6 MB/s 00:05:23.257 22:14:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.257 22:14:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.257 22:14:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.257 22:14:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.257 22:14:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.257 22:14:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.257 22:14:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.257 22:14:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.257 22:14:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.257 22:14:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.516 { 00:05:23.516 "nbd_device": "/dev/nbd0", 00:05:23.516 "bdev_name": "Malloc0" 00:05:23.516 }, 00:05:23.516 { 00:05:23.516 "nbd_device": "/dev/nbd1", 00:05:23.516 "bdev_name": "Malloc1" 00:05:23.516 } 00:05:23.516 ]' 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.516 { 00:05:23.516 "nbd_device": "/dev/nbd0", 00:05:23.516 "bdev_name": "Malloc0" 00:05:23.516 }, 00:05:23.516 { 00:05:23.516 "nbd_device": "/dev/nbd1", 00:05:23.516 "bdev_name": "Malloc1" 00:05:23.516 } 00:05:23.516 ]' 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.516 /dev/nbd1' 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.516 /dev/nbd1' 00:05:23.516 
22:14:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.516 256+0 records in 00:05:23.516 256+0 records out 00:05:23.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00992906 s, 106 MB/s 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.516 256+0 records in 00:05:23.516 256+0 records out 00:05:23.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146397 s, 71.6 MB/s 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.516 256+0 records in 00:05:23.516 256+0 records out 00:05:23.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153466 s, 68.3 MB/s 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.516 22:14:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.775 22:14:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.775 22:14:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.775 22:14:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.775 22:14:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.775 22:14:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.775 22:14:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.775 22:14:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.775 22:14:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.775 22:14:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.775 22:14:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.034 22:14:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.034 22:14:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.034 22:14:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.034 22:14:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.034 22:14:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.034 22:14:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.034 22:14:44 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:24.034 22:14:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.034 22:14:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.034 22:14:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.034 22:14:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.034 22:14:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.034 22:14:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.034 22:14:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.293 22:14:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.293 22:14:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.293 22:14:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.293 22:14:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.293 22:14:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.293 22:14:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.293 22:14:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.293 22:14:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.293 22:14:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.293 22:14:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.293 22:14:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.551 [2024-12-14 22:14:45.305673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.551 [2024-12-14 22:14:45.325481] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.551 [2024-12-14 22:14:45.325482] 
reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.551 [2024-12-14 22:14:45.366267] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.551 [2024-12-14 22:14:45.366306] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.837 22:14:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 112577 /var/tmp/spdk-nbd.sock 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112577 ']' 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:27.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
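The `nbd_dd_data_verify` steps traced above follow a plain write-then-compare pattern: fill a temp file with 1 MiB of data, dd it onto each NBD device, then `cmp` each device back against the source. A minimal sketch of that flow, assuming GNU dd/cmp, with ordinary temp files standing in for `/dev/nbd0` and `/dev/nbd1` (so `oflag=direct` is dropped; the file names are illustrative only):

```shell
# Sketch of the write/verify pattern from nbd_dd_data_verify above.
tmp_file=$(mktemp)
dev0=$(mktemp)    # stand-in for /dev/nbd0
dev1=$(mktemp)    # stand-in for /dev/nbd1

# Source pattern: 256 x 4096-byte blocks = 1 MiB, as in the trace.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none

# Write phase: copy the pattern onto each "device".
for dev in "$dev0" "$dev1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
done

# Verify phase: byte-compare the first 1M of each target against the source.
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
echo "verify ok"
```

Against a real block device the write would add `oflag=direct` to bypass the page cache, exactly as the trace shows.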
00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:27.837 22:14:48 event.app_repeat -- event/event.sh@39 -- # killprocess 112577 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 112577 ']' 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 112577 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112577 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112577' 00:05:27.837 killing process with pid 112577 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@973 -- # kill 112577 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@978 -- # wait 112577 00:05:27.837 spdk_app_start is called in Round 0. 00:05:27.837 Shutdown signal received, stop current app iteration 00:05:27.837 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:05:27.837 spdk_app_start is called in Round 1. 00:05:27.837 Shutdown signal received, stop current app iteration 00:05:27.837 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:05:27.837 spdk_app_start is called in Round 2. 
00:05:27.837 Shutdown signal received, stop current app iteration 00:05:27.837 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:05:27.837 spdk_app_start is called in Round 3. 00:05:27.837 Shutdown signal received, stop current app iteration 00:05:27.837 22:14:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:27.837 22:14:48 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:27.837 00:05:27.837 real 0m16.451s 00:05:27.837 user 0m36.332s 00:05:27.837 sys 0m2.512s 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.837 22:14:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.837 ************************************ 00:05:27.837 END TEST app_repeat 00:05:27.837 ************************************ 00:05:27.837 22:14:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:27.837 22:14:48 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:27.837 22:14:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.837 22:14:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.837 22:14:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.837 ************************************ 00:05:27.837 START TEST cpu_locks 00:05:27.837 ************************************ 00:05:27.837 22:14:48 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:27.837 * Looking for test storage... 
00:05:28.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:28.096 22:14:48 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.096 22:14:48 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.096 22:14:48 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:28.096 22:14:48 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.096 22:14:48 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.096 22:14:48 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.097 22:14:48 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:28.097 22:14:48 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.097 22:14:48 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.097 --rc genhtml_branch_coverage=1 00:05:28.097 --rc genhtml_function_coverage=1 00:05:28.097 --rc genhtml_legend=1 00:05:28.097 --rc geninfo_all_blocks=1 00:05:28.097 --rc geninfo_unexecuted_blocks=1 00:05:28.097 00:05:28.097 ' 00:05:28.097 22:14:48 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.097 --rc genhtml_branch_coverage=1 00:05:28.097 --rc genhtml_function_coverage=1 00:05:28.097 --rc genhtml_legend=1 00:05:28.097 --rc geninfo_all_blocks=1 00:05:28.097 --rc geninfo_unexecuted_blocks=1 
00:05:28.097 00:05:28.097 ' 00:05:28.097 22:14:48 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:28.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.097 --rc genhtml_branch_coverage=1 00:05:28.097 --rc genhtml_function_coverage=1 00:05:28.097 --rc genhtml_legend=1 00:05:28.097 --rc geninfo_all_blocks=1 00:05:28.097 --rc geninfo_unexecuted_blocks=1 00:05:28.097 00:05:28.097 ' 00:05:28.097 22:14:48 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.097 --rc genhtml_branch_coverage=1 00:05:28.097 --rc genhtml_function_coverage=1 00:05:28.097 --rc genhtml_legend=1 00:05:28.097 --rc geninfo_all_blocks=1 00:05:28.097 --rc geninfo_unexecuted_blocks=1 00:05:28.097 00:05:28.097 ' 00:05:28.097 22:14:48 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:28.097 22:14:48 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:28.097 22:14:48 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:28.097 22:14:48 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:28.097 22:14:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.097 22:14:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.097 22:14:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.097 ************************************ 00:05:28.097 START TEST default_locks 00:05:28.097 ************************************ 00:05:28.097 22:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:28.097 22:14:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=115683 00:05:28.097 22:14:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 115683 00:05:28.097 22:14:48 event.cpu_locks.default_locks 
-- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.097 22:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 115683 ']' 00:05:28.097 22:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.097 22:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.097 22:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.097 22:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.097 22:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.097 [2024-12-14 22:14:48.899486] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:28.097 [2024-12-14 22:14:48.899529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115683 ] 00:05:28.097 [2024-12-14 22:14:48.972242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.356 [2024-12-14 22:14:48.995086] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.356 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.356 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:28.356 22:14:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 115683 00:05:28.356 22:14:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 115683 00:05:28.356 22:14:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.615 lslocks: write error 00:05:28.615 22:14:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 115683 00:05:28.615 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 115683 ']' 00:05:28.615 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 115683 00:05:28.615 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:28.615 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.615 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115683 00:05:28.615 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.615 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.615 22:14:49 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 115683' 00:05:28.615 killing process with pid 115683 00:05:28.615 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 115683 00:05:28.615 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 115683 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 115683 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 115683 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 115683 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 115683 ']' 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
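The `killprocess` helper traced above (`kill -0`, `uname`, `ps --no-headers -o comm=`, then `kill`/`wait`) refuses to signal a pid unless the process is still alive and its command name is safe to kill. A standalone sketch of that guard — the function name here is ours, and the real helper in `autotest_common.sh` does more (per-OS branches, pkill fallbacks):

```shell
# Guarded kill, in the spirit of killprocess above: verify the pid is
# alive, resolve its command name, and never signal a sudo wrapper.
killprocess_sketch() {
    local pid=$1 name
    if ! kill -0 "$pid" 2>/dev/null; then
        return 1                                  # process gone already
    fi
    name=$(ps --no-headers -o comm= -p "$pid")    # e.g. "reactor_0" in the log
    if [ "$name" = sudo ]; then
        return 1                                  # refuse, as the helper does
    fi
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap if it is our child
}

sleep 30 &
killprocess_sketch $! && echo "killed background sleep"
```

The `wait` after the kill is what the trace's `-- # wait 115683` line corresponds to: without it, a child killed by the test would linger as a zombie.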
00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (115683) - No such process 00:05:28.875 ERROR: process (pid: 115683) is no longer running 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:28.875 00:05:28.875 real 0m0.829s 00:05:28.875 user 0m0.770s 00:05:28.875 sys 0m0.411s 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.875 22:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.875 ************************************ 00:05:28.875 END TEST default_locks 00:05:28.875 ************************************ 00:05:28.875 22:14:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:28.875 22:14:49 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.875 22:14:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.875 22:14:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.875 ************************************ 00:05:28.875 START TEST default_locks_via_rpc 00:05:28.875 ************************************ 00:05:28.875 22:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:28.875 22:14:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=115759 00:05:28.875 22:14:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 115759 00:05:28.875 22:14:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.875 22:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 115759 ']' 00:05:28.875 22:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.875 22:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.875 22:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.876 22:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.876 22:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.135 [2024-12-14 22:14:49.797073] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
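The `cmp_versions` trace above (`lt 1.15 2`) splits each version string into an array and compares it field by field, padding the shorter one with zeros. A dots-only sketch of the same idea — the real `scripts/common.sh` also splits on `-` and `:`:

```shell
# Field-wise numeric version compare, as in the "lt 1.15 2" trace above.
# Returns 0 (true) iff $1 < $2; equal versions are not less-than.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}    # pad missing fields with 0
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

This is why the lcov check in the log takes the `lt 1.15 2` branch: 1 is numerically below 2 in the very first field, regardless of the `.15`.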
00:05:29.135 [2024-12-14 22:14:49.797117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115759 ] 00:05:29.135 [2024-12-14 22:14:49.867601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.135 [2024-12-14 22:14:49.890263] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.393 22:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.393 22:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:29.393 22:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:29.393 22:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.393 22:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.393 22:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.393 22:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:29.393 22:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:29.394 22:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:29.394 22:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:29.394 22:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:29.394 22:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.394 22:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.394 22:14:50 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.394 22:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 115759 00:05:29.394 22:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 115759 00:05:29.394 22:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.961 22:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 115759 00:05:29.961 22:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 115759 ']' 00:05:29.961 22:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 115759 00:05:29.961 22:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:29.961 22:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.961 22:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115759 00:05:29.961 22:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.961 22:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.961 22:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115759' 00:05:29.961 killing process with pid 115759 00:05:29.961 22:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 115759 00:05:29.961 22:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 115759 00:05:30.221 00:05:30.221 real 0m1.168s 00:05:30.221 user 0m1.119s 00:05:30.221 sys 0m0.553s 00:05:30.221 22:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.221 22:14:50 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.221 ************************************ 00:05:30.221 END TEST default_locks_via_rpc 00:05:30.221 ************************************ 00:05:30.221 22:14:50 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:30.221 22:14:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.221 22:14:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.221 22:14:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.221 ************************************ 00:05:30.221 START TEST non_locking_app_on_locked_coremask 00:05:30.221 ************************************ 00:05:30.221 22:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:30.221 22:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.221 22:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=116013 00:05:30.221 22:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 116013 /var/tmp/spdk.sock 00:05:30.221 22:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116013 ']' 00:05:30.221 22:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.221 22:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.221 22:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:30.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.221 22:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.221 22:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.221 [2024-12-14 22:14:51.021439] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:30.221 [2024-12-14 22:14:51.021474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116013 ] 00:05:30.221 [2024-12-14 22:14:51.095546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.480 [2024-12-14 22:14:51.118531] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.480 22:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.480 22:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:30.481 22:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=116040 00:05:30.481 22:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 116040 /var/tmp/spdk2.sock 00:05:30.481 22:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:30.481 22:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116040 ']' 00:05:30.481 22:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:30.481 22:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.481 22:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.481 22:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.481 22:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.740 [2024-12-14 22:14:51.365045] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:30.740 [2024-12-14 22:14:51.365093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116040 ] 00:05:30.740 [2024-12-14 22:14:51.449827] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
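The `spdk_cpu_lock` files that `locks_exist` greps out of `lslocks -p <pid>` above are per-core advisory file locks: each reactor claims its core by holding an exclusive `flock` on a lock file, and `--disable-cpumask-locks` (the "CPU core locks deactivated" notice) opts out of the scheme. A sketch of the claim/conflict behaviour with `flock(1)`, under an illustrative path (the real files live under `/var/tmp` as `spdk_cpu_lock_*`):

```shell
# Per-core lock files, as probed by "lslocks | grep spdk_cpu_lock" above.
lockdir=$(mktemp -d)
lockfile=$lockdir/cpu_lock_0

exec 9>"$lockfile"            # keep the fd open to hold the lock
if flock -n 9; then
    echo "core 0 claimed"
fi

# A second claimant on the same core must be rejected while the
# first holds the lock -- this is the conflict the cpu_locks tests
# exercise by launching a second spdk_tgt on the same -m mask.
if ! flock -n "$lockfile" -c true; then
    echo "second claim rejected"
fi
```

Because `flock` locks belong to the open file description, the lock is released automatically when fd 9 is closed or the holding process exits, which is why a killed target (as in `killprocess` above) frees its cores without cleanup.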
00:05:30.740 [2024-12-14 22:14:51.449852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.740 [2024-12-14 22:14:51.496748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.677 22:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.678 22:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:31.678 22:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 116013 00:05:31.678 22:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 116013 00:05:31.678 22:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.936 lslocks: write error 00:05:31.936 22:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 116013 00:05:31.936 22:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116013 ']' 00:05:31.936 22:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 116013 00:05:31.936 22:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:31.936 22:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.936 22:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116013 00:05:32.194 22:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.194 22:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.194 22:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 116013' 00:05:32.194 killing process with pid 116013 00:05:32.194 22:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 116013 00:05:32.194 22:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 116013 00:05:32.762 22:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 116040 00:05:32.762 22:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116040 ']' 00:05:32.762 22:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 116040 00:05:32.762 22:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:32.762 22:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.762 22:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116040 00:05:32.762 22:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.762 22:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.762 22:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116040' 00:05:32.762 killing process with pid 116040 00:05:32.762 22:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 116040 00:05:32.762 22:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 116040 00:05:33.021 00:05:33.021 real 0m2.794s 00:05:33.021 user 0m2.951s 00:05:33.021 sys 0m0.942s 00:05:33.021 22:14:53 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.021 22:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.021 ************************************ 00:05:33.021 END TEST non_locking_app_on_locked_coremask 00:05:33.021 ************************************ 00:05:33.021 22:14:53 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:33.021 22:14:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.021 22:14:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.021 22:14:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.021 ************************************ 00:05:33.021 START TEST locking_app_on_unlocked_coremask 00:05:33.021 ************************************ 00:05:33.021 22:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:33.021 22:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=116495 00:05:33.021 22:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 116495 /var/tmp/spdk.sock 00:05:33.021 22:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:33.021 22:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116495 ']' 00:05:33.021 22:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.021 22:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.021 22:14:53 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.021 22:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.021 22:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.021 [2024-12-14 22:14:53.895152] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:33.022 [2024-12-14 22:14:53.895195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116495 ] 00:05:33.281 [2024-12-14 22:14:53.967297] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:33.281 [2024-12-14 22:14:53.967324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.281 [2024-12-14 22:14:53.987693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.540 22:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.540 22:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:33.540 22:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=116648 00:05:33.540 22:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 116648 /var/tmp/spdk2.sock 00:05:33.540 22:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:33.540 22:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116648 ']' 00:05:33.540 22:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.540 22:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.540 22:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.540 22:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.540 22:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.540 [2024-12-14 22:14:54.248607] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:33.540 [2024-12-14 22:14:54.248660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116648 ] 00:05:33.540 [2024-12-14 22:14:54.339118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.540 [2024-12-14 22:14:54.381243] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.106 22:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.106 22:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:34.106 22:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 116648 00:05:34.106 22:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 116648 00:05:34.106 22:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.365 lslocks: write error 00:05:34.365 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 116495 00:05:34.365 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116495 ']' 00:05:34.365 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 116495 00:05:34.365 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:34.365 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.365 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116495 00:05:34.365 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.365 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.365 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116495' 00:05:34.365 killing process with pid 116495 00:05:34.365 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 116495 00:05:34.365 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 116495 00:05:34.932 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 116648 00:05:34.932 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116648 ']' 00:05:34.932 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 116648 00:05:34.932 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:34.932 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.932 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116648 00:05:35.191 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.191 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.191 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116648' 00:05:35.191 killing process with pid 116648 00:05:35.191 22:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 116648 00:05:35.191 22:14:55 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 116648 00:05:35.450 00:05:35.450 real 0m2.298s 00:05:35.450 user 0m2.319s 00:05:35.450 sys 0m0.845s 00:05:35.450 22:14:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.450 22:14:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.450 ************************************ 00:05:35.450 END TEST locking_app_on_unlocked_coremask 00:05:35.450 ************************************ 00:05:35.450 22:14:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:35.450 22:14:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.450 22:14:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.450 22:14:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.450 ************************************ 00:05:35.450 START TEST locking_app_on_locked_coremask 00:05:35.450 ************************************ 00:05:35.450 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:35.450 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=116978 00:05:35.450 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 116978 /var/tmp/spdk.sock 00:05:35.450 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.450 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116978 ']' 00:05:35.450 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:05:35.450 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.450 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.450 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.450 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.450 [2024-12-14 22:14:56.263296] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:35.450 [2024-12-14 22:14:56.263340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116978 ] 00:05:35.710 [2024-12-14 22:14:56.338155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.710 [2024-12-14 22:14:56.358160] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.710 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.710 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:35.710 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=116986 00:05:35.710 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 116986 /var/tmp/spdk2.sock 00:05:35.710 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:05:35.710 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:35.710 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 116986 /var/tmp/spdk2.sock 00:05:35.710 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:35.710 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.710 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:35.710 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.710 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 116986 /var/tmp/spdk2.sock 00:05:35.710 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116986 ']' 00:05:35.710 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.710 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.710 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:35.710 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.710 22:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.968 [2024-12-14 22:14:56.614551] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:35.968 [2024-12-14 22:14:56.614599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116986 ] 00:05:35.969 [2024-12-14 22:14:56.702720] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 116978 has claimed it. 00:05:35.969 [2024-12-14 22:14:56.702759] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:36.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (116986) - No such process 00:05:36.536 ERROR: process (pid: 116986) is no longer running 00:05:36.536 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.536 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:36.536 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:36.536 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.536 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.536 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.536 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 116978 00:05:36.536 22:14:57 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 116978 00:05:36.536 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.104 lslocks: write error 00:05:37.104 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 116978 00:05:37.104 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116978 ']' 00:05:37.104 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 116978 00:05:37.104 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:37.104 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.104 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116978 00:05:37.104 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.104 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.104 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116978' 00:05:37.104 killing process with pid 116978 00:05:37.104 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 116978 00:05:37.104 22:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 116978 00:05:37.363 00:05:37.363 real 0m1.926s 00:05:37.363 user 0m2.082s 00:05:37.363 sys 0m0.655s 00:05:37.363 22:14:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.363 22:14:58 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:37.363 ************************************ 00:05:37.363 END TEST locking_app_on_locked_coremask 00:05:37.363 ************************************ 00:05:37.363 22:14:58 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:37.363 22:14:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.363 22:14:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.363 22:14:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.363 ************************************ 00:05:37.363 START TEST locking_overlapped_coremask 00:05:37.363 ************************************ 00:05:37.363 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:37.363 22:14:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=117358 00:05:37.363 22:14:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 117358 /var/tmp/spdk.sock 00:05:37.363 22:14:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:37.363 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 117358 ']' 00:05:37.363 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.363 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.363 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:37.363 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.363 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.622 [2024-12-14 22:14:58.261567] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:37.623 [2024-12-14 22:14:58.261608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117358 ] 00:05:37.623 [2024-12-14 22:14:58.335967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:37.623 [2024-12-14 22:14:58.361140] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.623 [2024-12-14 22:14:58.361247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.623 [2024-12-14 22:14:58.361248] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.882 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.882 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:37.882 22:14:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=117457 00:05:37.882 22:14:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 117457 /var/tmp/spdk2.sock 00:05:37.882 22:14:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:37.882 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:37.882 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 117457 /var/tmp/spdk2.sock 00:05:37.882 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:37.882 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.882 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:37.882 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.882 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 117457 /var/tmp/spdk2.sock 00:05:37.882 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 117457 ']' 00:05:37.882 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.882 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.882 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.882 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.882 22:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.882 [2024-12-14 22:14:58.604419] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:37.882 [2024-12-14 22:14:58.604467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117457 ] 00:05:37.882 [2024-12-14 22:14:58.695750] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117358 has claimed it. 00:05:37.882 [2024-12-14 22:14:58.695784] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:38.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (117457) - No such process 00:05:38.450 ERROR: process (pid: 117457) is no longer running 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 117358 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 117358 ']' 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 117358 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117358 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117358' 00:05:38.450 killing process with pid 117358 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 117358 00:05:38.450 22:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 117358 00:05:38.709 00:05:38.709 real 0m1.378s 00:05:38.709 user 0m3.824s 00:05:38.709 sys 0m0.387s 00:05:38.709 22:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.709 22:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.709 ************************************ 
00:05:38.709 END TEST locking_overlapped_coremask 00:05:38.709 ************************************ 00:05:38.968 22:14:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:38.968 22:14:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.968 22:14:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.968 22:14:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.968 ************************************ 00:05:38.968 START TEST locking_overlapped_coremask_via_rpc 00:05:38.968 ************************************ 00:05:38.968 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:38.968 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=117670 00:05:38.968 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 117670 /var/tmp/spdk.sock 00:05:38.968 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:38.968 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 117670 ']' 00:05:38.968 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.968 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.968 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:38.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.968 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.968 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.968 [2024-12-14 22:14:59.699774] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:38.968 [2024-12-14 22:14:59.699816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117670 ] 00:05:38.968 [2024-12-14 22:14:59.772495] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:38.968 [2024-12-14 22:14:59.772518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:38.968 [2024-12-14 22:14:59.797737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.969 [2024-12-14 22:14:59.797848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.969 [2024-12-14 22:14:59.797848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.228 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.228 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:39.228 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=117716 00:05:39.228 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 117716 /var/tmp/spdk2.sock 00:05:39.228 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:05:39.228 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 117716 ']' 00:05:39.228 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.228 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.228 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.228 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.228 22:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.228 [2024-12-14 22:15:00.039685] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:39.228 [2024-12-14 22:15:00.039738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117716 ] 00:05:39.487 [2024-12-14 22:15:00.142542] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:39.487 [2024-12-14 22:15:00.142574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.487 [2024-12-14 22:15:00.197080] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.487 [2024-12-14 22:15:00.197175] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.487 [2024-12-14 22:15:00.197175] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.055 22:15:00 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.055 [2024-12-14 22:15:00.915973] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117670 has claimed it. 00:05:40.055 request: 00:05:40.055 { 00:05:40.055 "method": "framework_enable_cpumask_locks", 00:05:40.055 "req_id": 1 00:05:40.055 } 00:05:40.055 Got JSON-RPC error response 00:05:40.055 response: 00:05:40.055 { 00:05:40.055 "code": -32603, 00:05:40.055 "message": "Failed to claim CPU core: 2" 00:05:40.055 } 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 117670 /var/tmp/spdk.sock 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 117670 ']' 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.055 22:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.313 22:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.313 22:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.313 22:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 117716 /var/tmp/spdk2.sock 00:05:40.313 22:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 117716 ']' 00:05:40.313 22:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.313 22:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.313 22:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:40.313 22:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.313 22:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.572 22:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.572 22:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.572 22:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:40.572 22:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:40.572 22:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:40.572 22:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:40.572 00:05:40.572 real 0m1.708s 00:05:40.572 user 0m0.862s 00:05:40.572 sys 0m0.143s 00:05:40.572 22:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.572 22:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.572 ************************************ 00:05:40.572 END TEST locking_overlapped_coremask_via_rpc 00:05:40.572 ************************************ 00:05:40.572 22:15:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:40.572 22:15:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 117670 ]] 00:05:40.572 22:15:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 117670 00:05:40.572 22:15:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 117670 ']' 00:05:40.572 22:15:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 117670 00:05:40.572 22:15:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:40.572 22:15:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.572 22:15:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117670 00:05:40.572 22:15:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.572 22:15:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.572 22:15:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117670' 00:05:40.572 killing process with pid 117670 00:05:40.572 22:15:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 117670 00:05:40.572 22:15:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 117670 00:05:41.141 22:15:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 117716 ]] 00:05:41.141 22:15:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 117716 00:05:41.141 22:15:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 117716 ']' 00:05:41.141 22:15:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 117716 00:05:41.141 22:15:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:41.141 22:15:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.141 22:15:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117716 00:05:41.141 22:15:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:41.141 22:15:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:41.141 22:15:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117716' 00:05:41.141 
killing process with pid 117716 00:05:41.141 22:15:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 117716 00:05:41.141 22:15:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 117716 00:05:41.400 22:15:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:41.400 22:15:02 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:41.400 22:15:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 117670 ]] 00:05:41.400 22:15:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 117670 00:05:41.400 22:15:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 117670 ']' 00:05:41.400 22:15:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 117670 00:05:41.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (117670) - No such process 00:05:41.400 22:15:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 117670 is not found' 00:05:41.400 Process with pid 117670 is not found 00:05:41.400 22:15:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 117716 ]] 00:05:41.400 22:15:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 117716 00:05:41.400 22:15:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 117716 ']' 00:05:41.400 22:15:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 117716 00:05:41.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (117716) - No such process 00:05:41.400 22:15:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 117716 is not found' 00:05:41.400 Process with pid 117716 is not found 00:05:41.400 22:15:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:41.400 00:05:41.400 real 0m13.471s 00:05:41.400 user 0m23.813s 00:05:41.400 sys 0m4.885s 00:05:41.400 22:15:02 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.400 22:15:02 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:05:41.400 ************************************ 00:05:41.400 END TEST cpu_locks 00:05:41.400 ************************************ 00:05:41.400 00:05:41.400 real 0m38.451s 00:05:41.400 user 1m14.497s 00:05:41.400 sys 0m8.387s 00:05:41.401 22:15:02 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.401 22:15:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.401 ************************************ 00:05:41.401 END TEST event 00:05:41.401 ************************************ 00:05:41.401 22:15:02 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:41.401 22:15:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.401 22:15:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.401 22:15:02 -- common/autotest_common.sh@10 -- # set +x 00:05:41.401 ************************************ 00:05:41.401 START TEST thread 00:05:41.401 ************************************ 00:05:41.401 22:15:02 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:41.660 * Looking for test storage... 
00:05:41.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:41.660 22:15:02 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.660 22:15:02 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.660 22:15:02 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.660 22:15:02 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.660 22:15:02 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.660 22:15:02 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.660 22:15:02 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.660 22:15:02 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.660 22:15:02 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.660 22:15:02 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.660 22:15:02 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.660 22:15:02 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.660 22:15:02 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.660 22:15:02 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.660 22:15:02 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.660 22:15:02 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:41.660 22:15:02 thread -- scripts/common.sh@345 -- # : 1 00:05:41.660 22:15:02 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.660 22:15:02 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.660 22:15:02 thread -- scripts/common.sh@365 -- # decimal 1 00:05:41.660 22:15:02 thread -- scripts/common.sh@353 -- # local d=1 00:05:41.660 22:15:02 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.660 22:15:02 thread -- scripts/common.sh@355 -- # echo 1 00:05:41.660 22:15:02 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.660 22:15:02 thread -- scripts/common.sh@366 -- # decimal 2 00:05:41.660 22:15:02 thread -- scripts/common.sh@353 -- # local d=2 00:05:41.660 22:15:02 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.660 22:15:02 thread -- scripts/common.sh@355 -- # echo 2 00:05:41.660 22:15:02 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.660 22:15:02 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.660 22:15:02 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.660 22:15:02 thread -- scripts/common.sh@368 -- # return 0 00:05:41.660 22:15:02 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.660 22:15:02 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.660 --rc genhtml_branch_coverage=1 00:05:41.660 --rc genhtml_function_coverage=1 00:05:41.660 --rc genhtml_legend=1 00:05:41.660 --rc geninfo_all_blocks=1 00:05:41.660 --rc geninfo_unexecuted_blocks=1 00:05:41.660 00:05:41.660 ' 00:05:41.660 22:15:02 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.660 --rc genhtml_branch_coverage=1 00:05:41.660 --rc genhtml_function_coverage=1 00:05:41.660 --rc genhtml_legend=1 00:05:41.660 --rc geninfo_all_blocks=1 00:05:41.660 --rc geninfo_unexecuted_blocks=1 00:05:41.660 00:05:41.660 ' 00:05:41.660 22:15:02 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:41.660 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.660 --rc genhtml_branch_coverage=1 00:05:41.660 --rc genhtml_function_coverage=1 00:05:41.660 --rc genhtml_legend=1 00:05:41.660 --rc geninfo_all_blocks=1 00:05:41.660 --rc geninfo_unexecuted_blocks=1 00:05:41.660 00:05:41.660 ' 00:05:41.660 22:15:02 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.660 --rc genhtml_branch_coverage=1 00:05:41.660 --rc genhtml_function_coverage=1 00:05:41.660 --rc genhtml_legend=1 00:05:41.660 --rc geninfo_all_blocks=1 00:05:41.660 --rc geninfo_unexecuted_blocks=1 00:05:41.660 00:05:41.660 ' 00:05:41.660 22:15:02 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.660 22:15:02 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:41.660 22:15:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.660 22:15:02 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.660 ************************************ 00:05:41.660 START TEST thread_poller_perf 00:05:41.660 ************************************ 00:05:41.660 22:15:02 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.660 [2024-12-14 22:15:02.447541] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:41.660 [2024-12-14 22:15:02.447609] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118399 ] 00:05:41.660 [2024-12-14 22:15:02.526166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.919 [2024-12-14 22:15:02.548883] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.919 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:42.855 [2024-12-14T21:15:03.739Z] ====================================== 00:05:42.855 [2024-12-14T21:15:03.739Z] busy:2108267876 (cyc) 00:05:42.855 [2024-12-14T21:15:03.739Z] total_run_count: 419000 00:05:42.855 [2024-12-14T21:15:03.739Z] tsc_hz: 2100000000 (cyc) 00:05:42.855 [2024-12-14T21:15:03.739Z] ====================================== 00:05:42.855 [2024-12-14T21:15:03.739Z] poller_cost: 5031 (cyc), 2395 (nsec) 00:05:42.855 00:05:42.855 real 0m1.165s 00:05:42.855 user 0m1.087s 00:05:42.855 sys 0m0.074s 00:05:42.855 22:15:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.855 22:15:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.855 ************************************ 00:05:42.855 END TEST thread_poller_perf 00:05:42.855 ************************************ 00:05:42.855 22:15:03 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:42.855 22:15:03 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:42.855 22:15:03 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.855 22:15:03 thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.855 ************************************ 00:05:42.855 START TEST thread_poller_perf 00:05:42.855 
************************************ 00:05:42.855 22:15:03 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:42.855 [2024-12-14 22:15:03.671038] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:42.855 [2024-12-14 22:15:03.671085] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118564 ] 00:05:42.855 [2024-12-14 22:15:03.726173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.114 [2024-12-14 22:15:03.748978] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.114 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:44.052 [2024-12-14T21:15:04.936Z] ====================================== 00:05:44.052 [2024-12-14T21:15:04.936Z] busy:2101509682 (cyc) 00:05:44.052 [2024-12-14T21:15:04.936Z] total_run_count: 5138000 00:05:44.052 [2024-12-14T21:15:04.936Z] tsc_hz: 2100000000 (cyc) 00:05:44.052 [2024-12-14T21:15:04.936Z] ====================================== 00:05:44.052 [2024-12-14T21:15:04.936Z] poller_cost: 409 (cyc), 194 (nsec) 00:05:44.052 00:05:44.052 real 0m1.125s 00:05:44.052 user 0m1.067s 00:05:44.052 sys 0m0.054s 00:05:44.052 22:15:04 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.052 22:15:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.052 ************************************ 00:05:44.052 END TEST thread_poller_perf 00:05:44.052 ************************************ 00:05:44.052 22:15:04 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:44.052 00:05:44.052 real 0m2.603s 00:05:44.052 user 0m2.318s 00:05:44.052 sys 0m0.299s 00:05:44.052 22:15:04 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.052 22:15:04 thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.052 ************************************ 00:05:44.052 END TEST thread 00:05:44.052 ************************************ 00:05:44.052 22:15:04 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:44.052 22:15:04 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:44.052 22:15:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.052 22:15:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.052 22:15:04 -- common/autotest_common.sh@10 -- # set +x 00:05:44.052 ************************************ 00:05:44.052 START TEST app_cmdline 00:05:44.052 ************************************ 00:05:44.052 22:15:04 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:44.311 * Looking for test storage... 00:05:44.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:44.311 22:15:04 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:44.311 22:15:04 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:44.311 22:15:04 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:44.311 22:15:05 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:44.311 22:15:05 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.312 22:15:05 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:44.312 22:15:05 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.312 22:15:05 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:44.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.312 --rc genhtml_branch_coverage=1 
00:05:44.312 --rc genhtml_function_coverage=1 00:05:44.312 --rc genhtml_legend=1 00:05:44.312 --rc geninfo_all_blocks=1 00:05:44.312 --rc geninfo_unexecuted_blocks=1 00:05:44.312 00:05:44.312 ' 00:05:44.312 22:15:05 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:44.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.312 --rc genhtml_branch_coverage=1 00:05:44.312 --rc genhtml_function_coverage=1 00:05:44.312 --rc genhtml_legend=1 00:05:44.312 --rc geninfo_all_blocks=1 00:05:44.312 --rc geninfo_unexecuted_blocks=1 00:05:44.312 00:05:44.312 ' 00:05:44.312 22:15:05 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:44.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.312 --rc genhtml_branch_coverage=1 00:05:44.312 --rc genhtml_function_coverage=1 00:05:44.312 --rc genhtml_legend=1 00:05:44.312 --rc geninfo_all_blocks=1 00:05:44.312 --rc geninfo_unexecuted_blocks=1 00:05:44.312 00:05:44.312 ' 00:05:44.312 22:15:05 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:44.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.312 --rc genhtml_branch_coverage=1 00:05:44.312 --rc genhtml_function_coverage=1 00:05:44.312 --rc genhtml_legend=1 00:05:44.312 --rc geninfo_all_blocks=1 00:05:44.312 --rc geninfo_unexecuted_blocks=1 00:05:44.312 00:05:44.312 ' 00:05:44.312 22:15:05 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:44.312 22:15:05 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=118881 00:05:44.312 22:15:05 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 118881 00:05:44.312 22:15:05 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:44.312 22:15:05 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 118881 ']' 00:05:44.312 22:15:05 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:44.312 22:15:05 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.312 22:15:05 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.312 22:15:05 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.312 22:15:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.312 [2024-12-14 22:15:05.123708] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:44.312 [2024-12-14 22:15:05.123756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118881 ] 00:05:44.571 [2024-12-14 22:15:05.196385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.571 [2024-12-14 22:15:05.219205] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.571 22:15:05 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.571 22:15:05 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:44.571 22:15:05 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:44.829 { 00:05:44.829 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:05:44.829 "fields": { 00:05:44.829 "major": 25, 00:05:44.829 "minor": 1, 00:05:44.829 "patch": 0, 00:05:44.829 "suffix": "-pre", 00:05:44.829 "commit": "e01cb43b8" 00:05:44.829 } 00:05:44.829 } 00:05:44.829 22:15:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:44.829 22:15:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:44.829 22:15:05 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:05:44.829 22:15:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:44.829 22:15:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:44.829 22:15:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:44.829 22:15:05 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.829 22:15:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:44.829 22:15:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.829 22:15:05 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.829 22:15:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:44.829 22:15:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:44.829 22:15:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:44.829 22:15:05 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:44.829 22:15:05 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:44.829 22:15:05 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:44.829 22:15:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.830 22:15:05 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:44.830 22:15:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.830 22:15:05 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:44.830 22:15:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:05:44.830 22:15:05 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:44.830 22:15:05 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:44.830 22:15:05 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:45.089 request: 00:05:45.089 { 00:05:45.089 "method": "env_dpdk_get_mem_stats", 00:05:45.089 "req_id": 1 00:05:45.089 } 00:05:45.089 Got JSON-RPC error response 00:05:45.089 response: 00:05:45.089 { 00:05:45.089 "code": -32601, 00:05:45.089 "message": "Method not found" 00:05:45.089 } 00:05:45.089 22:15:05 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:45.089 22:15:05 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:45.089 22:15:05 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:45.089 22:15:05 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:45.089 22:15:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 118881 00:05:45.089 22:15:05 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 118881 ']' 00:05:45.089 22:15:05 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 118881 00:05:45.089 22:15:05 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:45.089 22:15:05 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.089 22:15:05 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118881 00:05:45.089 22:15:05 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.089 22:15:05 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.089 22:15:05 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118881' 00:05:45.089 killing process with pid 118881 00:05:45.089 22:15:05 
app_cmdline -- common/autotest_common.sh@973 -- # kill 118881 00:05:45.089 22:15:05 app_cmdline -- common/autotest_common.sh@978 -- # wait 118881 00:05:45.348 00:05:45.348 real 0m1.307s 00:05:45.348 user 0m1.537s 00:05:45.348 sys 0m0.440s 00:05:45.348 22:15:06 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.348 22:15:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:45.348 ************************************ 00:05:45.348 END TEST app_cmdline 00:05:45.348 ************************************ 00:05:45.608 22:15:06 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:45.608 22:15:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.608 22:15:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.608 22:15:06 -- common/autotest_common.sh@10 -- # set +x 00:05:45.608 ************************************ 00:05:45.608 START TEST version 00:05:45.608 ************************************ 00:05:45.608 22:15:06 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:45.608 * Looking for test storage... 
00:05:45.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:45.608 22:15:06 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.608 22:15:06 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.608 22:15:06 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:45.608 22:15:06 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:45.608 22:15:06 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.608 22:15:06 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.608 22:15:06 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.608 22:15:06 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.608 22:15:06 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.608 22:15:06 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.608 22:15:06 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.608 22:15:06 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.608 22:15:06 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.608 22:15:06 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.608 22:15:06 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.608 22:15:06 version -- scripts/common.sh@344 -- # case "$op" in 00:05:45.608 22:15:06 version -- scripts/common.sh@345 -- # : 1 00:05:45.608 22:15:06 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.608 22:15:06 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.608 22:15:06 version -- scripts/common.sh@365 -- # decimal 1 00:05:45.608 22:15:06 version -- scripts/common.sh@353 -- # local d=1 00:05:45.608 22:15:06 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.608 22:15:06 version -- scripts/common.sh@355 -- # echo 1 00:05:45.608 22:15:06 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.608 22:15:06 version -- scripts/common.sh@366 -- # decimal 2 00:05:45.608 22:15:06 version -- scripts/common.sh@353 -- # local d=2 00:05:45.608 22:15:06 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.608 22:15:06 version -- scripts/common.sh@355 -- # echo 2 00:05:45.608 22:15:06 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.608 22:15:06 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.608 22:15:06 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.608 22:15:06 version -- scripts/common.sh@368 -- # return 0 00:05:45.608 22:15:06 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.608 22:15:06 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:45.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.608 --rc genhtml_branch_coverage=1 00:05:45.608 --rc genhtml_function_coverage=1 00:05:45.608 --rc genhtml_legend=1 00:05:45.608 --rc geninfo_all_blocks=1 00:05:45.608 --rc geninfo_unexecuted_blocks=1 00:05:45.608 00:05:45.608 ' 00:05:45.608 22:15:06 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:45.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.608 --rc genhtml_branch_coverage=1 00:05:45.608 --rc genhtml_function_coverage=1 00:05:45.608 --rc genhtml_legend=1 00:05:45.608 --rc geninfo_all_blocks=1 00:05:45.608 --rc geninfo_unexecuted_blocks=1 00:05:45.608 00:05:45.608 ' 00:05:45.608 22:15:06 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:45.608 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.608 --rc genhtml_branch_coverage=1 00:05:45.608 --rc genhtml_function_coverage=1 00:05:45.608 --rc genhtml_legend=1 00:05:45.608 --rc geninfo_all_blocks=1 00:05:45.608 --rc geninfo_unexecuted_blocks=1 00:05:45.608 00:05:45.608 ' 00:05:45.608 22:15:06 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:45.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.608 --rc genhtml_branch_coverage=1 00:05:45.608 --rc genhtml_function_coverage=1 00:05:45.608 --rc genhtml_legend=1 00:05:45.608 --rc geninfo_all_blocks=1 00:05:45.608 --rc geninfo_unexecuted_blocks=1 00:05:45.608 00:05:45.608 ' 00:05:45.608 22:15:06 version -- app/version.sh@17 -- # get_header_version major 00:05:45.608 22:15:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:45.608 22:15:06 version -- app/version.sh@14 -- # cut -f2 00:05:45.608 22:15:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.608 22:15:06 version -- app/version.sh@17 -- # major=25 00:05:45.608 22:15:06 version -- app/version.sh@18 -- # get_header_version minor 00:05:45.608 22:15:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:45.608 22:15:06 version -- app/version.sh@14 -- # cut -f2 00:05:45.608 22:15:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.608 22:15:06 version -- app/version.sh@18 -- # minor=1 00:05:45.608 22:15:06 version -- app/version.sh@19 -- # get_header_version patch 00:05:45.608 22:15:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:45.608 22:15:06 version -- app/version.sh@14 -- # cut -f2 00:05:45.608 22:15:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.609 
22:15:06 version -- app/version.sh@19 -- # patch=0 00:05:45.609 22:15:06 version -- app/version.sh@20 -- # get_header_version suffix 00:05:45.609 22:15:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:45.609 22:15:06 version -- app/version.sh@14 -- # cut -f2 00:05:45.609 22:15:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.609 22:15:06 version -- app/version.sh@20 -- # suffix=-pre 00:05:45.609 22:15:06 version -- app/version.sh@22 -- # version=25.1 00:05:45.609 22:15:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:45.609 22:15:06 version -- app/version.sh@28 -- # version=25.1rc0 00:05:45.609 22:15:06 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:45.609 22:15:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:45.868 22:15:06 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:45.868 22:15:06 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:45.868 00:05:45.868 real 0m0.246s 00:05:45.868 user 0m0.140s 00:05:45.868 sys 0m0.151s 00:05:45.868 22:15:06 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.868 22:15:06 version -- common/autotest_common.sh@10 -- # set +x 00:05:45.868 ************************************ 00:05:45.868 END TEST version 00:05:45.868 ************************************ 00:05:45.868 22:15:06 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:45.868 22:15:06 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:45.868 22:15:06 -- spdk/autotest.sh@194 -- # uname -s 00:05:45.868 22:15:06 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:45.868 22:15:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:45.868 22:15:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:45.868 22:15:06 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:45.868 22:15:06 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:45.868 22:15:06 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:45.868 22:15:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:45.868 22:15:06 -- common/autotest_common.sh@10 -- # set +x 00:05:45.868 22:15:06 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:45.868 22:15:06 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:45.868 22:15:06 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:45.868 22:15:06 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:45.868 22:15:06 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:45.868 22:15:06 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:45.868 22:15:06 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:45.868 22:15:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:45.868 22:15:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.868 22:15:06 -- common/autotest_common.sh@10 -- # set +x 00:05:45.868 ************************************ 00:05:45.868 START TEST nvmf_tcp 00:05:45.868 ************************************ 00:05:45.868 22:15:06 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:45.868 * Looking for test storage... 
00:05:45.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:45.868 22:15:06 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.868 22:15:06 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.868 22:15:06 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.127 22:15:06 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.127 22:15:06 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:46.128 22:15:06 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.128 22:15:06 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.128 22:15:06 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.128 22:15:06 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:46.128 22:15:06 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.128 22:15:06 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.128 --rc genhtml_branch_coverage=1 00:05:46.128 --rc genhtml_function_coverage=1 00:05:46.128 --rc genhtml_legend=1 00:05:46.128 --rc geninfo_all_blocks=1 00:05:46.128 --rc geninfo_unexecuted_blocks=1 00:05:46.128 00:05:46.128 ' 00:05:46.128 22:15:06 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.128 --rc genhtml_branch_coverage=1 00:05:46.128 --rc genhtml_function_coverage=1 00:05:46.128 --rc genhtml_legend=1 00:05:46.128 --rc geninfo_all_blocks=1 00:05:46.128 --rc geninfo_unexecuted_blocks=1 00:05:46.128 00:05:46.128 ' 00:05:46.128 22:15:06 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:46.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.128 --rc genhtml_branch_coverage=1 00:05:46.128 --rc genhtml_function_coverage=1 00:05:46.128 --rc genhtml_legend=1 00:05:46.128 --rc geninfo_all_blocks=1 00:05:46.128 --rc geninfo_unexecuted_blocks=1 00:05:46.128 00:05:46.128 ' 00:05:46.128 22:15:06 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.128 --rc genhtml_branch_coverage=1 00:05:46.128 --rc genhtml_function_coverage=1 00:05:46.128 --rc genhtml_legend=1 00:05:46.128 --rc geninfo_all_blocks=1 00:05:46.128 --rc geninfo_unexecuted_blocks=1 00:05:46.128 00:05:46.128 ' 00:05:46.128 22:15:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:46.128 22:15:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:46.128 22:15:06 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:46.128 22:15:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:46.128 22:15:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.128 22:15:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.128 ************************************ 00:05:46.128 START TEST nvmf_target_core 00:05:46.128 ************************************ 00:05:46.128 22:15:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:46.128 * Looking for test storage... 
00:05:46.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:46.128 22:15:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:46.128 22:15:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:46.128 22:15:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.128 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.388 --rc genhtml_branch_coverage=1 00:05:46.388 --rc genhtml_function_coverage=1 00:05:46.388 --rc genhtml_legend=1 00:05:46.388 --rc geninfo_all_blocks=1 00:05:46.388 --rc geninfo_unexecuted_blocks=1 00:05:46.388 00:05:46.388 ' 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.388 --rc genhtml_branch_coverage=1 
00:05:46.388 --rc genhtml_function_coverage=1 00:05:46.388 --rc genhtml_legend=1 00:05:46.388 --rc geninfo_all_blocks=1 00:05:46.388 --rc geninfo_unexecuted_blocks=1 00:05:46.388 00:05:46.388 ' 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.388 --rc genhtml_branch_coverage=1 00:05:46.388 --rc genhtml_function_coverage=1 00:05:46.388 --rc genhtml_legend=1 00:05:46.388 --rc geninfo_all_blocks=1 00:05:46.388 --rc geninfo_unexecuted_blocks=1 00:05:46.388 00:05:46.388 ' 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.388 --rc genhtml_branch_coverage=1 00:05:46.388 --rc genhtml_function_coverage=1 00:05:46.388 --rc genhtml_legend=1 00:05:46.388 --rc geninfo_all_blocks=1 00:05:46.388 --rc geninfo_unexecuted_blocks=1 00:05:46.388 00:05:46.388 ' 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:46.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:46.388 ************************************ 00:05:46.388 START TEST nvmf_abort 00:05:46.388 ************************************ 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:46.388 * Looking for test storage... 
00:05:46.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.388 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.389 
22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.389 --rc genhtml_branch_coverage=1 00:05:46.389 --rc genhtml_function_coverage=1 00:05:46.389 --rc genhtml_legend=1 00:05:46.389 --rc geninfo_all_blocks=1 00:05:46.389 --rc 
geninfo_unexecuted_blocks=1 00:05:46.389 00:05:46.389 ' 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.389 --rc genhtml_branch_coverage=1 00:05:46.389 --rc genhtml_function_coverage=1 00:05:46.389 --rc genhtml_legend=1 00:05:46.389 --rc geninfo_all_blocks=1 00:05:46.389 --rc geninfo_unexecuted_blocks=1 00:05:46.389 00:05:46.389 ' 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.389 --rc genhtml_branch_coverage=1 00:05:46.389 --rc genhtml_function_coverage=1 00:05:46.389 --rc genhtml_legend=1 00:05:46.389 --rc geninfo_all_blocks=1 00:05:46.389 --rc geninfo_unexecuted_blocks=1 00:05:46.389 00:05:46.389 ' 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.389 --rc genhtml_branch_coverage=1 00:05:46.389 --rc genhtml_function_coverage=1 00:05:46.389 --rc genhtml_legend=1 00:05:46.389 --rc geninfo_all_blocks=1 00:05:46.389 --rc geninfo_unexecuted_blocks=1 00:05:46.389 00:05:46.389 ' 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.389 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
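The `cmp_versions`/`lt` trace above (checking whether lcov 1.15 is older than 2 before enabling the branch-coverage flags) boils down to a component-wise integer comparison of dotted version strings. A minimal standalone sketch — a simplified re-implementation for illustration, not SPDK's exact `scripts/common.sh` helper, which also splits on `-` and supports the `>`, `>=`, and `==` operators:

```shell
#!/usr/bin/env bash
# Compare two dotted versions component-wise as integers.
# Simplified sketch of the lt/cmp_versions logic traced above.
version_lt() {
    local IFS=.
    local -a ver1=($1) ver2=($2)      # IFS=. splits "1.15" into (1 15)
    local i a b n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < n; i++ )); do
        a=${ver1[i]:-0} b=${ver2[i]:-0}   # pad missing components with 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # versions are equal -> not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

Padding missing components with 0 is what makes `1.15 < 2` come out true even though the strings have different numbers of components.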
00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.649 22:15:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:46.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:46.649 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:53.219 22:15:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:53.219 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:53.219 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:53.219 22:15:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:53.219 Found net devices under 0000:af:00.0: cvl_0_0 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:53.219 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:05:53.220 Found net devices under 0000:af:00.1: cvl_0_1 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:53.220 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:53.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:53.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:05:53.220 00:05:53.220 --- 10.0.0.2 ping statistics --- 00:05:53.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:53.220 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:53.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:53.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:05:53.220 00:05:53.220 --- 10.0.0.1 ping statistics --- 00:05:53.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:53.220 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=122864 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 122864 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 122864 ']' 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.220 [2024-12-14 22:15:13.298568] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
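The `waitforlisten 122864` call traced above gates the test on the freshly spawned `nvmf_tgt` actually serving its RPC socket, rather than sleeping for a fixed interval. A hedged sketch of that pattern — the helper name and simplified check are ours; the real `waitforlisten` in `autotest_common.sh` also verifies the pid is still alive and probes the socket with an actual RPC:

```shell
# Poll until the target's RPC socket path appears, with a bounded retry
# count. Sketch only: the real helper also checks the process is alive
# and issues an RPC over the socket rather than just testing the path.
wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -e "$sock" ] && return 0   # path exists -> target came up
        sleep 0.1
    done
    return 1                         # gave up: caller should kill the target
}
```

Bounding the retries matters in CI: if the target crashes on startup, the harness fails fast instead of hanging the pipeline.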
00:05:53.220 [2024-12-14 22:15:13.298611] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:53.220 [2024-12-14 22:15:13.374129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.220 [2024-12-14 22:15:13.396965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:53.220 [2024-12-14 22:15:13.397011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:53.220 [2024-12-14 22:15:13.397018] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.220 [2024-12-14 22:15:13.397024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.220 [2024-12-14 22:15:13.397028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
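Stripped of the xtrace noise, the `rpc_cmd` calls that follow construct the abort-test target in a short sequence. Equivalently, against the running `nvmf_tgt` (the `rpc.py` path is this run's workspace; arguments are exactly those in the trace):

```shell
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport with the options used in this run
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
# 64 MB malloc bdev with 4096-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
$rpc bdev_malloc_create 64 4096 -b Malloc0
# Delay bdev stacked on Malloc0, so the abort test has slow, in-flight
# I/O to cancel -- the point of using a delay bdev here
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# Subsystem, namespace, and data-path listener
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Discovery listener, so the abort example can enumerate the subsystem
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

This sequence only makes sense against a live target inside the `cvl_0_0_ns_spdk` namespace set up earlier, so it is shown as a command transcript rather than a runnable script.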
00:05:53.220 [2024-12-14 22:15:13.398211] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.220 [2024-12-14 22:15:13.398317] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.220 [2024-12-14 22:15:13.398318] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.220 [2024-12-14 22:15:13.537241] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.220 Malloc0 00:05:53.220 22:15:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.220 Delay0 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.220 [2024-12-14 22:15:13.617266] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.220 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.221 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:53.221 [2024-12-14 22:15:13.749749] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:55.125 Initializing NVMe Controllers 00:05:55.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:55.125 controller IO queue size 128 less than required 00:05:55.125 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:55.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:55.125 Initialization complete. Launching workers. 
00:05:55.125 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 38771 00:05:55.125 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38835, failed to submit 62 00:05:55.125 success 38775, unsuccessful 60, failed 0 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:55.125 rmmod nvme_tcp 00:05:55.125 rmmod nvme_fabrics 00:05:55.125 rmmod nvme_keyring 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:55.125 22:15:15 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 122864 ']' 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 122864 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 122864 ']' 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 122864 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122864 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:55.125 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122864' 00:05:55.125 killing process with pid 122864 00:05:55.126 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 122864 00:05:55.126 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 122864 00:05:55.385 22:15:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:55.385 22:15:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:55.385 22:15:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:55.385 22:15:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:55.385 22:15:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:55.385 22:15:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:05:55.385 22:15:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:55.385 22:15:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:55.385 22:15:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:55.385 22:15:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:55.385 22:15:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:55.385 22:15:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:57.921 00:05:57.921 real 0m11.124s 00:05:57.921 user 0m11.800s 00:05:57.921 sys 0m5.124s 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:57.921 ************************************ 00:05:57.921 END TEST nvmf_abort 00:05:57.921 ************************************ 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:57.921 ************************************ 00:05:57.921 START TEST nvmf_ns_hotplug_stress 00:05:57.921 ************************************ 00:05:57.921 22:15:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:57.921 * Looking for test storage... 00:05:57.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.921 
22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.921 22:15:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.921 --rc genhtml_branch_coverage=1 00:05:57.921 --rc genhtml_function_coverage=1 00:05:57.921 --rc genhtml_legend=1 00:05:57.921 --rc geninfo_all_blocks=1 00:05:57.921 --rc geninfo_unexecuted_blocks=1 00:05:57.921 00:05:57.921 ' 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.921 --rc genhtml_branch_coverage=1 00:05:57.921 --rc genhtml_function_coverage=1 00:05:57.921 --rc genhtml_legend=1 00:05:57.921 --rc geninfo_all_blocks=1 00:05:57.921 --rc geninfo_unexecuted_blocks=1 00:05:57.921 00:05:57.921 ' 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.921 --rc genhtml_branch_coverage=1 00:05:57.921 --rc genhtml_function_coverage=1 00:05:57.921 --rc genhtml_legend=1 00:05:57.921 --rc geninfo_all_blocks=1 00:05:57.921 --rc geninfo_unexecuted_blocks=1 00:05:57.921 00:05:57.921 ' 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.921 --rc genhtml_branch_coverage=1 00:05:57.921 --rc genhtml_function_coverage=1 00:05:57.921 --rc genhtml_legend=1 00:05:57.921 --rc geninfo_all_blocks=1 00:05:57.921 --rc geninfo_unexecuted_blocks=1 00:05:57.921 
00:05:57.921 ' 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:57.921 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:57.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:57.922 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:04.492 22:15:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:04.492 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:04.492 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:04.492 22:15:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:04.492 Found net devices under 0000:af:00.0: cvl_0_0 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:04.492 22:15:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:04.492 Found net devices under 0000:af:00.1: cvl_0_1 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:04.492 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:04.493 22:15:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:04.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:04.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:06:04.493 00:06:04.493 --- 10.0.0.2 ping statistics --- 00:06:04.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.493 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:04.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:04.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:06:04.493 00:06:04.493 --- 10.0.0.1 ping statistics --- 00:06:04.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.493 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=126889 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 126889 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 126889 ']' 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.493 [2024-12-14 22:15:24.485618] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:04.493 [2024-12-14 22:15:24.485664] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:04.493 [2024-12-14 22:15:24.566664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.493 [2024-12-14 22:15:24.589128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:04.493 [2024-12-14 22:15:24.589162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:04.493 [2024-12-14 22:15:24.589169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:04.493 [2024-12-14 22:15:24.589175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:04.493 [2024-12-14 22:15:24.589180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:04.493 [2024-12-14 22:15:24.590486] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.493 [2024-12-14 22:15:24.590594] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.493 [2024-12-14 22:15:24.590596] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:04.493 [2024-12-14 22:15:24.886915] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.493 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:04.493 22:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:04.493 [2024-12-14 22:15:25.280350] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:04.493 22:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:04.752 22:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:05.010 Malloc0 00:06:05.010 22:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:05.268 Delay0 00:06:05.268 22:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.268 22:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:05.526 NULL1 00:06:05.526 22:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:05.784 22:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:05.784 22:15:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=127153 00:06:05.784 22:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:05.784 22:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.043 22:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.043 22:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:06.043 22:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:06.301 true 00:06:06.301 22:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:06.301 22:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.560 22:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.819 22:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:06.819 22:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:07.076 true 00:06:07.076 22:15:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:07.076 22:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.334 22:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.335 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:07.335 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:07.593 true 00:06:07.593 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:07.593 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.851 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.109 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:08.109 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:08.367 true 00:06:08.367 22:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:08.367 22:15:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.625 22:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.625 22:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:08.625 22:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:08.883 true 00:06:08.883 22:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:08.883 22:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.142 22:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.400 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:09.400 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:09.400 true 00:06:09.658 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:09.658 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.658 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.916 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:09.916 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:10.175 true 00:06:10.175 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:10.175 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.433 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.691 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:10.691 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:10.691 true 00:06:10.950 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:10.950 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.950 
22:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.208 22:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:11.208 22:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:11.466 true 00:06:11.466 22:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:11.466 22:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.725 22:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.983 22:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:11.983 22:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:11.983 true 00:06:12.242 22:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:12.242 22:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.242 22:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.500 22:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:12.500 22:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:12.758 true 00:06:12.758 22:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:12.758 22:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.016 22:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.275 22:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:13.275 22:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:13.275 true 00:06:13.533 22:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:13.533 22:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.533 22:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.792 
22:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:13.792 22:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:14.050 true 00:06:14.050 22:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:14.050 22:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.313 22:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.572 22:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:14.572 22:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:14.572 true 00:06:14.830 22:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:14.830 22:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.830 22:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.088 22:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:15.088 22:15:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:15.346 true 00:06:15.346 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:15.346 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.603 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.862 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:15.862 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:15.862 true 00:06:16.120 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:16.120 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.120 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.378 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:16.378 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:16.636 true 00:06:16.636 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:16.636 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.894 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.152 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:17.152 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:17.152 true 00:06:17.410 22:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:17.410 22:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.410 22:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.668 22:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:17.668 22:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:17.926 true 00:06:17.926 22:15:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:17.926 22:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.184 22:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.442 22:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:18.442 22:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:18.442 true 00:06:18.701 22:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:18.701 22:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.701 22:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.959 22:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:18.959 22:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:19.217 true 00:06:19.217 22:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:19.217 22:15:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.476 22:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.734 22:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:19.734 22:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:19.993 true 00:06:19.993 22:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:19.993 22:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.993 22:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.251 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:20.251 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:20.509 true 00:06:20.509 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:20.509 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.767 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.026 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:21.026 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:21.026 true 00:06:21.284 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:21.284 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.284 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.542 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:21.542 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:21.800 true 00:06:21.800 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:21.800 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.059 
22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.317 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:22.317 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:22.317 true 00:06:22.576 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:22.576 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.576 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.834 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:22.834 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:23.092 true 00:06:23.092 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:23.092 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.350 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.609 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:23.609 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:23.609 true 00:06:23.609 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:23.609 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.867 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.126 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:24.126 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:24.388 true 00:06:24.388 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:24.388 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.649 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.907 
22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:24.907 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:24.907 true 00:06:24.907 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:24.907 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.165 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.424 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:25.424 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:25.682 true 00:06:25.682 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:25.682 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.941 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.199 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:26.199 22:15:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:26.199 true 00:06:26.199 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:26.199 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.457 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.716 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:26.716 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:26.974 true 00:06:26.974 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:26.974 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.232 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.491 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:27.491 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:27.491 true 00:06:27.491 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:27.491 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.749 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.007 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:28.007 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:28.266 true 00:06:28.266 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:28.266 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.524 22:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.782 22:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:28.782 22:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:28.782 true 00:06:28.782 22:15:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:28.782 22:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.041 22:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.299 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:29.299 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:29.558 true 00:06:29.558 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:29.558 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.816 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.075 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:30.075 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:30.075 true 00:06:30.075 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:30.075 22:15:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.333 22:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.597 22:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:30.597 22:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:30.856 true 00:06:30.856 22:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:30.856 22:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.114 22:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.372 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:31.372 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:31.372 true 00:06:31.372 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:31.372 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.631 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.889 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:31.889 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:32.148 true 00:06:32.148 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:32.148 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.406 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.664 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:32.664 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:32.664 true 00:06:32.664 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:32.664 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.924 
22:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.182 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:33.182 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:33.441 true 00:06:33.441 22:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:33.441 22:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.699 22:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.958 22:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:33.958 22:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:33.958 true 00:06:33.958 22:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:33.958 22:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.217 22:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.475 22:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:34.475 22:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:34.733 true 00:06:34.733 22:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:34.733 22:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.992 22:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.992 22:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:34.992 22:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:35.250 true 00:06:35.250 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153 00:06:35.251 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.509 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.768 
22:15:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:06:35.768 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:06:36.032 true
00:06:36.032 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153
00:06:36.032 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:36.032 Initializing NVMe Controllers
00:06:36.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:36.032 Controller IO queue size 128, less than required.
00:06:36.032 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:36.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:36.032 Initialization complete. Launching workers.
00:06:36.032 ========================================================
00:06:36.032 Latency(us)
00:06:36.032 Device Information : IOPS MiB/s Average min max
00:06:36.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27337.29 13.35 4682.26 2265.32 8595.64
00:06:36.032 ========================================================
00:06:36.032 Total : 27337.29 13.35 4682.26 2265.32 8595.64
00:06:36.291 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:36.291 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:06:36.291 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:06:36.550 true
00:06:36.550 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127153
00:06:36.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (127153) - No such process
00:06:36.550 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 127153
00:06:36.550 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:36.809 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:36.809 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:36.809 22:15:57
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:36.809 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:36.809 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:36.809 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:37.068 null0 00:06:37.068 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:37.068 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:37.068 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:37.327 null1 00:06:37.327 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:37.327 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:37.327 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:37.586 null2 00:06:37.587 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:37.587 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:37.587 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:37.587 null3 
00:06:37.846 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:37.846 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:37.846 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:37.846 null4 00:06:37.846 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:37.846 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:37.846 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:38.105 null5 00:06:38.105 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:38.105 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:38.105 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:38.364 null6 00:06:38.364 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:38.364 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:38.364 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:38.623 null7 00:06:38.623 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:38.623 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:38.623 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:38.623 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:38.623 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:38.623 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:38.623 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:38.623 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:38.623 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:38.623 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:38.624 
22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 132876 132877 132879 132881 132883 132885 132887 132889 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:38.624 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.624 
22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.884 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.143 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.143 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.143 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.143 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.143 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.143 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.144 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.144 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.403 22:16:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.403 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.662 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.662 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.662 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.662 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.662 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.662 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.662 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.662 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.662 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.662 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.662 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.662 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.662 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.662 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:39.662 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.662 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:06:39.662 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.921 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.180 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.180 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.180 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:40.180 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.180 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.180 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:40.180 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.181 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.181 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:40.181 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.181 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.181 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:40.181 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.181 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.181 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:40.181 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.181 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.181 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:40.181 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.181 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.181 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:40.181 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.181 22:16:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.181 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:40.440 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.440 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:40.440 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.440 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.440 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.440 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.440 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:40.440 22:16:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:40.699 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.957 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.957 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.957 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.957 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.957 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:40.957 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:40.957 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.957 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:40.957 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.957 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.957 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:40.958 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.958 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.958 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:40.958 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.958 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.958 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:40.958 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.958 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.958 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:40.958 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.958 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.958 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:40.958 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.958 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.958 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:40.958 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.958 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.958 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:41.217 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:41.217 22:16:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.217 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:41.217 22:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:41.217 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:41.217 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:41.217 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:41.217 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:41.476 22:16:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.476 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:41.735 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.735 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:41.735 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:41.735 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:41.735 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:41.735 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:41.735 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:41.735 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:41.735 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.735 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.735 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:41.735 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.735 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.735 
22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.735 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:41.735 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.735 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:41.995 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.254 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.254 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.254 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:42.254 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.254 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.254 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.254 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.254 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.254 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:42.254 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.254 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.254 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:42.254 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.255 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.255 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:42.255 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.255 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.255 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.255 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:42.255 22:16:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.255 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:42.255 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.255 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.255 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:42.514 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.514 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:42.514 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.514 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:42.514 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.514 22:16:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:42.514 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.514 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:42.774 rmmod nvme_tcp 00:06:42.774 rmmod nvme_fabrics 00:06:42.774 rmmod nvme_keyring 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:42.774 22:16:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 126889 ']' 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 126889 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 126889 ']' 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 126889 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126889 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126889' 00:06:42.774 killing process with pid 126889 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 126889 00:06:42.774 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 126889 00:06:43.034 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:43.034 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:43.034 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:43.034 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 
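The teardown above walks through the `killprocess` helper in common/autotest_common.sh: it checks `kill -0`, resolves the process name with `ps --no-headers -o comm=`, refuses to kill a `sudo` wrapper, and then kills and reaps the pid. A simplified standalone sketch of that pattern (the function name and the demo `sleep` child are illustrative, not from the source):

```shell
#!/usr/bin/env bash
# Hedged sketch of the killprocess pattern visible in the trace:
# confirm the pid still maps to a real process, refuse to kill sudo,
# then signal it and reap it.
killprocess_sketch() {
    local pid=$1 name
    # process already gone? nothing to do
    name=$(ps --no-headers -o comm= "$pid") || return 0
    # the traced helper bails out rather than kill a sudo wrapper
    [ "$name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true   # reap our own child
}

# demo: start a sleeper in the background and kill it
sleep 30 &
pid=$!
killprocess_sketch "$pid"
```

Using `wait` after `kill` matters: without reaping, the dead child lingers as a zombie and `kill -0` would still report it as alive.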
00:06:43.034 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:43.034 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:43.034 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:43.034 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:43.035 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:43.035 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.035 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.035 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.573 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:45.573 00:06:45.573 real 0m47.564s 00:06:45.573 user 3m23.152s 00:06:45.573 sys 0m17.060s 00:06:45.573 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.573 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:45.573 ************************************ 00:06:45.573 END TEST nvmf_ns_hotplug_stress 00:06:45.573 ************************************ 00:06:45.573 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:45.573 22:16:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:45.573 22:16:05 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.573 22:16:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:45.573 ************************************ 00:06:45.573 START TEST nvmf_delete_subsystem 00:06:45.573 ************************************ 00:06:45.573 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:45.573 * Looking for test storage... 00:06:45.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:45.573 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:45.573 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:45.573 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:45.573 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:45.573 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.573 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.573 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.573 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.573 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.573 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.573 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.573 22:16:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.573 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.573 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.573 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@366 -- # ver2[v]=2 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:45.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.574 --rc genhtml_branch_coverage=1 00:06:45.574 --rc genhtml_function_coverage=1 00:06:45.574 --rc genhtml_legend=1 00:06:45.574 --rc geninfo_all_blocks=1 00:06:45.574 --rc geninfo_unexecuted_blocks=1 00:06:45.574 00:06:45.574 ' 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:45.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.574 --rc genhtml_branch_coverage=1 00:06:45.574 --rc genhtml_function_coverage=1 00:06:45.574 --rc genhtml_legend=1 00:06:45.574 --rc geninfo_all_blocks=1 00:06:45.574 --rc geninfo_unexecuted_blocks=1 00:06:45.574 00:06:45.574 ' 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:45.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.574 --rc genhtml_branch_coverage=1 00:06:45.574 --rc genhtml_function_coverage=1 00:06:45.574 --rc genhtml_legend=1 00:06:45.574 --rc geninfo_all_blocks=1 00:06:45.574 --rc geninfo_unexecuted_blocks=1 00:06:45.574 00:06:45.574 ' 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:45.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.574 --rc genhtml_branch_coverage=1 00:06:45.574 --rc genhtml_function_coverage=1 00:06:45.574 --rc genhtml_legend=1 00:06:45.574 --rc geninfo_all_blocks=1 00:06:45.574 --rc geninfo_unexecuted_blocks=1 00:06:45.574 00:06:45.574 ' 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:45.574 22:16:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.574 22:16:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:45.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:45.574 22:16:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:45.574 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:52.149 22:16:11 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:52.149 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:52.150 22:16:11 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:52.150 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:52.150 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:52.150 Found net devices under 0000:af:00.0: cvl_0_0 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:52.150 Found net devices under 0000:af:00.1: cvl_0_1 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:52.150 22:16:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m 
comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:52.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:52.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:06:52.150 00:06:52.150 --- 10.0.0.2 ping statistics --- 00:06:52.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.150 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:52.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:52.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:06:52.150 00:06:52.150 --- 10.0.0.1 ping statistics --- 00:06:52.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.150 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:52.150 22:16:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=137200 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 137200 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 137200 ']' 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.150 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.150 [2024-12-14 22:16:12.235242] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:52.150 [2024-12-14 22:16:12.235284] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.150 [2024-12-14 22:16:12.313333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.151 [2024-12-14 22:16:12.334887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:52.151 [2024-12-14 22:16:12.334927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:52.151 [2024-12-14 22:16:12.334935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:52.151 [2024-12-14 22:16:12.334942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:52.151 [2024-12-14 22:16:12.334947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:52.151 [2024-12-14 22:16:12.336056] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.151 [2024-12-14 22:16:12.336059] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.151 [2024-12-14 22:16:12.479789] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.151 [2024-12-14 22:16:12.500006] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.151 NULL1 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.151 Delay0 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.151 22:16:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=137227 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:52.151 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:52.151 [2024-12-14 22:16:12.611047] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:54.055 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:54.055 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.055 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error 
(sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 [2024-12-14 22:16:14.730051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4b140 is same with the state(6) to be set 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, 
sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 
00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 starting 
I/O failed: -6 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 starting I/O failed: -6 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.055 [2024-12-14 22:16:14.730659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1040000c80 is same with the state(6) to be set 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Write completed with error (sct=0, sc=8) 00:06:54.055 Read completed with error (sct=0, sc=8) 00:06:54.056 Write completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Write completed with error (sct=0, sc=8) 00:06:54.056 Write completed 
with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Write completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Write completed with error (sct=0, sc=8) 00:06:54.056 Write completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Write completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Write completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Write completed with error (sct=0, sc=8) 00:06:54.056 Write completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 
00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Write completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Write completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Read completed with error (sct=0, sc=8) 00:06:54.056 Write completed with error (sct=0, sc=8) 00:06:54.056 Write completed with error (sct=0, sc=8) 00:06:54.998 [2024-12-14 22:16:15.704815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48260 is same with the state(6) to be set 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read 
completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 [2024-12-14 22:16:15.729620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4ac60 is same with the state(6) to be set 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with 
error (sct=0, sc=8) 00:06:54.998 [2024-12-14 22:16:15.733324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f104000d800 is same with the state(6) to be set 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 [2024-12-14 22:16:15.733534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f104000d060 is same with the state(6) to be set 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed 
with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Write completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.998 Read completed with error (sct=0, sc=8) 00:06:54.999 Write completed with error (sct=0, sc=8) 00:06:54.999 Read completed with error (sct=0, sc=8) 00:06:54.999 [2024-12-14 22:16:15.734472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f5f0 is same with the state(6) to be set 00:06:54.999 Initializing NVMe Controllers 00:06:54.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:54.999 Controller IO queue size 128, less than required. 00:06:54.999 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:54.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:54.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:54.999 Initialization complete. 
Launching workers. 00:06:54.999 ======================================================== 00:06:54.999 Latency(us) 00:06:54.999 Device Information : IOPS MiB/s Average min max 00:06:54.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.41 0.09 887247.97 446.02 1010803.84 00:06:54.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.44 0.08 925116.36 273.98 2001483.05 00:06:54.999 ======================================================== 00:06:54.999 Total : 343.85 0.17 905908.55 273.98 2001483.05 00:06:54.999 00:06:54.999 [2024-12-14 22:16:15.734833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d48260 (9): Bad file descriptor 00:06:54.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:54.999 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.999 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:54.999 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 137227 00:06:54.999 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:55.567 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:55.567 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 137227 00:06:55.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (137227) - No such process 00:06:55.567 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 137227 00:06:55.567 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:55.567 22:16:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 137227 00:06:55.567 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:55.567 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.567 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:55.567 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.567 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 137227 00:06:55.567 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:55.567 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.567 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:55.567 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.568 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:55.568 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.568 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.568 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.568 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:55.568 
22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.568 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.568 [2024-12-14 22:16:16.263868] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.568 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.568 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.568 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.568 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.568 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.568 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=137899 00:06:55.568 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:55.568 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:55.568 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137899 00:06:55.568 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:55.568 [2024-12-14 22:16:16.342363] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:56.136 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:56.136 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137899 00:06:56.136 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:56.703 22:16:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:56.703 22:16:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137899 00:06:56.703 22:16:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:56.962 22:16:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:56.962 22:16:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137899 00:06:56.962 22:16:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:57.528 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:57.528 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137899 00:06:57.528 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:58.094 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:58.094 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137899 00:06:58.094 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:58.661 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:58.661 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137899 00:06:58.661 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:58.661 Initializing NVMe Controllers 00:06:58.661 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:58.661 Controller IO queue size 128, less than required. 00:06:58.661 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:58.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:58.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:58.661 Initialization complete. Launching workers. 00:06:58.661 ======================================================== 00:06:58.661 Latency(us) 00:06:58.661 Device Information : IOPS MiB/s Average min max 00:06:58.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002057.41 1000164.73 1040979.85 00:06:58.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003930.36 1000129.02 1009912.16 00:06:58.661 ======================================================== 00:06:58.661 Total : 256.00 0.12 1002993.88 1000129.02 1040979.85 00:06:58.661 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137899 00:06:59.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (137899) - No such process 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 137899 00:06:59.229 22:16:19 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:59.229 rmmod nvme_tcp 00:06:59.229 rmmod nvme_fabrics 00:06:59.229 rmmod nvme_keyring 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 137200 ']' 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 137200 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 137200 ']' 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 137200 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 137200 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 137200' 00:06:59.229 killing process with pid 137200 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 137200 00:06:59.229 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 137200 00:06:59.229 22:16:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:59.229 22:16:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:59.229 22:16:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:59.229 22:16:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:59.489 22:16:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:59.489 22:16:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:59.489 22:16:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:59.489 22:16:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:59.489 22:16:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:59.489 22:16:20 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.489 22:16:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.489 22:16:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.396 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:01.396 00:07:01.396 real 0m16.259s 00:07:01.396 user 0m29.318s 00:07:01.396 sys 0m5.408s 00:07:01.396 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.396 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.396 ************************************ 00:07:01.396 END TEST nvmf_delete_subsystem 00:07:01.396 ************************************ 00:07:01.396 22:16:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:01.396 22:16:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.396 22:16:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.396 22:16:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:01.396 ************************************ 00:07:01.396 START TEST nvmf_host_management 00:07:01.396 ************************************ 00:07:01.396 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:01.656 * Looking for test storage... 
00:07:01.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:01.656 22:16:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.656 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.657 22:16:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:01.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.657 --rc genhtml_branch_coverage=1 00:07:01.657 --rc genhtml_function_coverage=1 00:07:01.657 --rc genhtml_legend=1 00:07:01.657 --rc geninfo_all_blocks=1 00:07:01.657 --rc geninfo_unexecuted_blocks=1 00:07:01.657 00:07:01.657 ' 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:01.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.657 --rc genhtml_branch_coverage=1 00:07:01.657 --rc genhtml_function_coverage=1 00:07:01.657 --rc genhtml_legend=1 00:07:01.657 --rc geninfo_all_blocks=1 00:07:01.657 --rc geninfo_unexecuted_blocks=1 00:07:01.657 00:07:01.657 ' 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:01.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.657 --rc genhtml_branch_coverage=1 00:07:01.657 --rc genhtml_function_coverage=1 00:07:01.657 --rc genhtml_legend=1 00:07:01.657 --rc geninfo_all_blocks=1 00:07:01.657 --rc geninfo_unexecuted_blocks=1 00:07:01.657 00:07:01.657 ' 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:01.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.657 --rc genhtml_branch_coverage=1 00:07:01.657 --rc genhtml_function_coverage=1 00:07:01.657 --rc genhtml_legend=1 00:07:01.657 --rc geninfo_all_blocks=1 00:07:01.657 --rc geninfo_unexecuted_blocks=1 00:07:01.657 00:07:01.657 ' 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:01.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:01.657 22:16:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:08.235 22:16:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.235 22:16:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:08.235 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:08.236 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:08.236 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:08.236 22:16:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:08.236 Found net devices under 0000:af:00.0: cvl_0_0 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:08.236 Found net devices under 0000:af:00.1: cvl_0_1 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:08.236 22:16:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:08.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:07:08.236 00:07:08.236 --- 10.0.0.2 ping statistics --- 00:07:08.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.236 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:08.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:08.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:07:08.236 00:07:08.236 --- 10.0.0.1 ping statistics --- 00:07:08.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.236 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=142054 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 142054 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 142054 ']' 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.236 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.237 [2024-12-14 22:16:28.508599] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:08.237 [2024-12-14 22:16:28.508638] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.237 [2024-12-14 22:16:28.583458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.237 [2024-12-14 22:16:28.606887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.237 [2024-12-14 22:16:28.606925] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.237 [2024-12-14 22:16:28.606933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.237 [2024-12-14 22:16:28.606939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.237 [2024-12-14 22:16:28.606944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
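The `nvmfappstart`/`waitforlisten` pair above launches `nvmf_tgt` inside the namespace and then blocks until the app is reachable on `/var/tmp/spdk.sock`. The real helper polls via the RPC framework (and the `max_retries=100` seen in the trace); the sketch below substitutes a simple socket-file check for the RPC probe, so it is a simplified stand-in rather than the autotest implementation:

```shell
# Simplified waitforlisten: succeed once the pid is alive and its
# UNIX-domain RPC socket exists; fail if the process dies or we time out.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
    while [ "$retries" -gt 0 ]; do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [ -S "$sock" ] && return 0               # socket present: app is listening
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1                                     # gave up waiting
}
```

Checking the pid on every iteration matters: without it, a target that crashes during DPDK/EAL init would make the test hang for the full retry budget instead of failing fast.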
00:07:08.237 [2024-12-14 22:16:28.608238] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.237 [2024-12-14 22:16:28.608345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.237 [2024-12-14 22:16:28.608452] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.237 [2024-12-14 22:16:28.608453] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.237 [2024-12-14 22:16:28.740614] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:08.237 22:16:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.237 Malloc0 00:07:08.237 [2024-12-14 22:16:28.807207] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=142097 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 142097 /var/tmp/bdevperf.sock 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 142097 ']' 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:08.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:08.237 { 00:07:08.237 "params": { 00:07:08.237 "name": "Nvme$subsystem", 00:07:08.237 "trtype": "$TEST_TRANSPORT", 00:07:08.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:08.237 "adrfam": "ipv4", 00:07:08.237 "trsvcid": "$NVMF_PORT", 00:07:08.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:08.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:08.237 "hdgst": ${hdgst:-false}, 
00:07:08.237 "ddgst": ${ddgst:-false} 00:07:08.237 }, 00:07:08.237 "method": "bdev_nvme_attach_controller" 00:07:08.237 } 00:07:08.237 EOF 00:07:08.237 )") 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:08.237 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:08.237 "params": { 00:07:08.237 "name": "Nvme0", 00:07:08.237 "trtype": "tcp", 00:07:08.237 "traddr": "10.0.0.2", 00:07:08.237 "adrfam": "ipv4", 00:07:08.237 "trsvcid": "4420", 00:07:08.237 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:08.237 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:08.237 "hdgst": false, 00:07:08.237 "ddgst": false 00:07:08.237 }, 00:07:08.237 "method": "bdev_nvme_attach_controller" 00:07:08.237 }' 00:07:08.237 [2024-12-14 22:16:28.899752] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:08.237 [2024-12-14 22:16:28.899795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142097 ] 00:07:08.237 [2024-12-14 22:16:28.975231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.237 [2024-12-14 22:16:28.997628] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.499 Running I/O for 10 seconds... 
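The JSON blob printed above is what `gen_nvmf_target_json 0` feeds to bdevperf via `--json /dev/fd/63`: one `bdev_nvme_attach_controller` entry per subsystem index. A stripped-down sketch of that generator is below; the real helper additionally accumulates the entries into a config array and normalizes the result through `jq .`, which is omitted here:

```shell
# Simplified gen_nvmf_target_json: emit one bdev_nvme_attach_controller
# config object per subsystem index (default 0), mirroring the log output.
gen_target_json() {
    # Defaults mirror the values substituted in the trace above.
    local ip=${NVMF_FIRST_TARGET_IP:-10.0.0.2} port=${NVMF_PORT:-4420}
    local subsystem
    for subsystem in "${@:-0}"; do
        cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$ip",
    "adrfam": "ipv4",
    "trsvcid": "$port",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    done
}
```

Templating the NQNs off the subsystem index is what lets the same helper drive single-subsystem tests (`cnode0`/`host0`, as here) and multi-controller runs without per-test config files.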
00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=104 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 104 -ge 100 ']' 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.499 [2024-12-14 22:16:29.252585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0590 is same with the state(6) to be set 00:07:08.499 [2024-12-14 22:16:29.252626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0590 is same with the state(6) to be set 00:07:08.499 [2024-12-14 22:16:29.252634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0590 is same 
with the state(6) to be set 00:07:08.499 [2024-12-14 22:16:29.252640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0590 is same with the state(6) to be set 00:07:08.499 [2024-12-14 22:16:29.252646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0590 is same with the state(6) to be set 00:07:08.499 [2024-12-14 22:16:29.252652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0590 is same with the state(6) to be set 00:07:08.499 [2024-12-14 22:16:29.252663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0590 is same with the state(6) to be set 00:07:08.499 [2024-12-14 22:16:29.252670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0590 is same with the state(6) to be set 00:07:08.499 [2024-12-14 22:16:29.252676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0590 is same with the state(6) to be set 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.499 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.500 [2024-12-14 22:16:29.262566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:08.500 [2024-12-14 22:16:29.262596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:08.500 [2024-12-14 22:16:29.262613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:08.500 [2024-12-14 22:16:29.262627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:08.500 [2024-12-14 22:16:29.262641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885d40 is same with the state(6) to be set 00:07:08.500 [2024-12-14 22:16:29.262679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262808] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262888] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.262988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.262996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.263003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.263011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.263017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.263025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.263032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.263040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.263047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.263055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.263061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 
[2024-12-14 22:16:29.263069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.263076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.263083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.263090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.263098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.263104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.263112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.263119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.263127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.263133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.263142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.263149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.500 [2024-12-14 22:16:29.263157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.500 [2024-12-14 22:16:29.263163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... identical WRITE / ABORTED - SQ DELETION (00/08) pairs repeated for cid:32 through cid:63 (lba 28672 through 32640, len:128) omitted ...] 00:07:08.501 [2024-12-14 22:16:29.264551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 
00:07:08.501 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.501 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:08.501 task offset: 24576 on job bdev=Nvme0n1 fails 00:07:08.501 00:07:08.501 Latency(us) 00:07:08.501 [2024-12-14T21:16:29.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.501 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:08.501 Job: Nvme0n1 ended in about 0.11 seconds with error 00:07:08.501 Verification LBA range: start 0x0 length 0x400 00:07:08.501 Nvme0n1 : 0.11 1727.16 107.95 575.72 0.00 25667.41 1575.98 27337.87 00:07:08.501 [2024-12-14T21:16:29.385Z] =================================================================================================================== 00:07:08.501 [2024-12-14T21:16:29.385Z] Total : 1727.16 107.95 575.72 0.00 25667.41 1575.98 27337.87 00:07:08.501 [2024-12-14 22:16:29.266885] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.501 [2024-12-14 22:16:29.266911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x885d40 (9): Bad file descriptor 00:07:08.501 [2024-12-14 22:16:29.311039] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:07:09.439 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 142097 00:07:09.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (142097) - No such process 00:07:09.439 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:09.439 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:09.439 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:09.439 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:09.439 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:09.439 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:09.439 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:09.439 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:09.439 { 00:07:09.439 "params": { 00:07:09.439 "name": "Nvme$subsystem", 00:07:09.439 "trtype": "$TEST_TRANSPORT", 00:07:09.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:09.439 "adrfam": "ipv4", 00:07:09.439 "trsvcid": "$NVMF_PORT", 00:07:09.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:09.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:09.439 "hdgst": ${hdgst:-false}, 00:07:09.439 "ddgst": ${ddgst:-false} 00:07:09.439 }, 00:07:09.439 "method": "bdev_nvme_attach_controller" 00:07:09.439 } 00:07:09.439 EOF 00:07:09.439 )") 00:07:09.439 22:16:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:09.439 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:09.439 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:09.439 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:09.439 "params": { 00:07:09.439 "name": "Nvme0", 00:07:09.439 "trtype": "tcp", 00:07:09.439 "traddr": "10.0.0.2", 00:07:09.439 "adrfam": "ipv4", 00:07:09.439 "trsvcid": "4420", 00:07:09.439 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:09.440 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:09.440 "hdgst": false, 00:07:09.440 "ddgst": false 00:07:09.440 }, 00:07:09.440 "method": "bdev_nvme_attach_controller" 00:07:09.440 }' 00:07:09.440 [2024-12-14 22:16:30.318237] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:09.440 [2024-12-14 22:16:30.318286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142331 ] 00:07:09.699 [2024-12-14 22:16:30.393148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.699 [2024-12-14 22:16:30.415129] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.958 Running I/O for 1 seconds... 
00:07:10.897 1984.00 IOPS, 124.00 MiB/s 00:07:10.897 Latency(us) 00:07:10.897 [2024-12-14T21:16:31.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.897 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:10.897 Verification LBA range: start 0x0 length 0x400 00:07:10.897 Nvme0n1 : 1.01 2035.14 127.20 0.00 0.00 30956.22 7084.13 26838.55 00:07:10.897 [2024-12-14T21:16:31.781Z] =================================================================================================================== 00:07:10.897 [2024-12-14T21:16:31.781Z] Total : 2035.14 127.20 0.00 0.00 30956.22 7084.13 26838.55 00:07:10.897 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:10.898 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:10.898 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:10.898 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:11.157 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:11.157 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:11.157 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:11.157 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:11.157 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:11.157 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:11.157 22:16:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:11.157 rmmod nvme_tcp 00:07:11.157 rmmod nvme_fabrics 00:07:11.157 rmmod nvme_keyring 00:07:11.157 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:11.157 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:11.157 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:11.157 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 142054 ']' 00:07:11.157 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 142054 00:07:11.157 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 142054 ']' 00:07:11.157 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 142054 00:07:11.157 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:11.157 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.157 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 142054 00:07:11.158 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:11.158 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:11.158 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 142054' 00:07:11.158 killing process with pid 142054 00:07:11.158 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 142054 00:07:11.158 22:16:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 142054 00:07:11.417 [2024-12-14 22:16:32.050669] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:11.417 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:11.417 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:11.417 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:11.417 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:11.417 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:11.417 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:11.417 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:11.417 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:11.417 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:11.417 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.417 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.417 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.326 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:13.326 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:13.326 00:07:13.326 real 0m11.892s 00:07:13.326 user 0m17.423s 
00:07:13.326 sys 0m5.432s 00:07:13.326 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.326 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.326 ************************************ 00:07:13.326 END TEST nvmf_host_management 00:07:13.326 ************************************ 00:07:13.326 22:16:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:13.326 22:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:13.326 22:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.326 22:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:13.586 ************************************ 00:07:13.586 START TEST nvmf_lvol 00:07:13.586 ************************************ 00:07:13.586 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:13.586 * Looking for test storage... 
00:07:13.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.586 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:13.586 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:13.586 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:13.586 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:13.586 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.586 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.586 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.586 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.586 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.586 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.586 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.586 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.586 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.586 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.586 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.587 22:16:34 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:13.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.587 --rc genhtml_branch_coverage=1 00:07:13.587 --rc genhtml_function_coverage=1 00:07:13.587 --rc genhtml_legend=1 00:07:13.587 --rc geninfo_all_blocks=1 00:07:13.587 --rc geninfo_unexecuted_blocks=1 
00:07:13.587 00:07:13.587 ' 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:13.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.587 --rc genhtml_branch_coverage=1 00:07:13.587 --rc genhtml_function_coverage=1 00:07:13.587 --rc genhtml_legend=1 00:07:13.587 --rc geninfo_all_blocks=1 00:07:13.587 --rc geninfo_unexecuted_blocks=1 00:07:13.587 00:07:13.587 ' 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:13.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.587 --rc genhtml_branch_coverage=1 00:07:13.587 --rc genhtml_function_coverage=1 00:07:13.587 --rc genhtml_legend=1 00:07:13.587 --rc geninfo_all_blocks=1 00:07:13.587 --rc geninfo_unexecuted_blocks=1 00:07:13.587 00:07:13.587 ' 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:13.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.587 --rc genhtml_branch_coverage=1 00:07:13.587 --rc genhtml_function_coverage=1 00:07:13.587 --rc genhtml_legend=1 00:07:13.587 --rc geninfo_all_blocks=1 00:07:13.587 --rc geninfo_unexecuted_blocks=1 00:07:13.587 00:07:13.587 ' 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.587 22:16:34 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:13.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:13.587 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:20.169 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:20.169 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:20.170 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:20.170 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:20.170 
22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:20.170 Found net devices under 0000:af:00.0: cvl_0_0 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:20.170 22:16:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:20.170 Found net devices under 0000:af:00.1: cvl_0_1 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:20.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:20.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:07:20.170 00:07:20.170 --- 10.0.0.2 ping statistics --- 00:07:20.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.170 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:20.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:20.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:07:20.170 00:07:20.170 --- 10.0.0.1 ping statistics --- 00:07:20.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.170 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=146251 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 146251 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 146251 ']' 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.170 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:20.170 [2024-12-14 22:16:40.473884] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:20.170 [2024-12-14 22:16:40.473937] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.170 [2024-12-14 22:16:40.550085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.170 [2024-12-14 22:16:40.573273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:20.170 [2024-12-14 22:16:40.573309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:20.170 [2024-12-14 22:16:40.573316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.170 [2024-12-14 22:16:40.573323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:20.170 [2024-12-14 22:16:40.573328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:20.171 [2024-12-14 22:16:40.574500] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.171 [2024-12-14 22:16:40.574538] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.171 [2024-12-14 22:16:40.574539] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.171 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.171 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:20.171 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:20.171 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:20.171 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:20.171 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.171 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:20.171 [2024-12-14 22:16:40.867547] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.171 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:20.432 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:20.432 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:20.690 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:20.690 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:20.690 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:20.950 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0e27e548-316d-4be9-a432-50932cc239da 00:07:20.950 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0e27e548-316d-4be9-a432-50932cc239da lvol 20 00:07:21.210 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=db3abd2a-d30b-4f54-9cbb-edef0e29857a 00:07:21.210 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:21.469 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 db3abd2a-d30b-4f54-9cbb-edef0e29857a 00:07:21.729 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:21.729 [2024-12-14 22:16:42.531663] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.729 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:21.988 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=146591 00:07:21.988 22:16:42 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:21.988 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:22.925 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot db3abd2a-d30b-4f54-9cbb-edef0e29857a MY_SNAPSHOT 00:07:23.184 22:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=54aab9e2-2695-44dd-9a4c-1887709f4262 00:07:23.184 22:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize db3abd2a-d30b-4f54-9cbb-edef0e29857a 30 00:07:23.442 22:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 54aab9e2-2695-44dd-9a4c-1887709f4262 MY_CLONE 00:07:23.701 22:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=68584db4-3df4-44f2-88f1-adb59d90abc0 00:07:23.701 22:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 68584db4-3df4-44f2-88f1-adb59d90abc0 00:07:24.269 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 146591 00:07:32.386 Initializing NVMe Controllers 00:07:32.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:32.386 Controller IO queue size 128, less than required. 00:07:32.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:32.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:32.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:32.386 Initialization complete. Launching workers. 00:07:32.386 ======================================================== 00:07:32.386 Latency(us) 00:07:32.386 Device Information : IOPS MiB/s Average min max 00:07:32.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12234.40 47.79 10464.77 1442.41 91771.58 00:07:32.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12077.90 47.18 10596.81 3408.01 45100.96 00:07:32.386 ======================================================== 00:07:32.386 Total : 24312.30 94.97 10530.36 1442.41 91771.58 00:07:32.386 00:07:32.386 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:32.645 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete db3abd2a-d30b-4f54-9cbb-edef0e29857a 00:07:32.904 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0e27e548-316d-4be9-a432-50932cc239da 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:33.163 rmmod nvme_tcp 00:07:33.163 rmmod nvme_fabrics 00:07:33.163 rmmod nvme_keyring 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 146251 ']' 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 146251 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 146251 ']' 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 146251 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 146251 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 146251' 00:07:33.163 killing process with pid 146251 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@973 -- # kill 146251 00:07:33.163 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 146251 00:07:33.422 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:33.422 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:33.422 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:33.422 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:33.422 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:33.423 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:33.423 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:33.423 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:33.423 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:33.423 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.423 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.423 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:35.962 00:07:35.962 real 0m22.043s 00:07:35.962 user 1m3.574s 00:07:35.962 sys 0m7.556s 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:35.962 ************************************ 00:07:35.962 END TEST nvmf_lvol 00:07:35.962 
************************************ 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:35.962 ************************************ 00:07:35.962 START TEST nvmf_lvs_grow 00:07:35.962 ************************************ 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:35.962 * Looking for test storage... 00:07:35.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:35.962 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:35.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.963 --rc genhtml_branch_coverage=1 00:07:35.963 --rc genhtml_function_coverage=1 00:07:35.963 --rc genhtml_legend=1 00:07:35.963 --rc geninfo_all_blocks=1 00:07:35.963 --rc geninfo_unexecuted_blocks=1 00:07:35.963 00:07:35.963 ' 
00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:35.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.963 --rc genhtml_branch_coverage=1 00:07:35.963 --rc genhtml_function_coverage=1 00:07:35.963 --rc genhtml_legend=1 00:07:35.963 --rc geninfo_all_blocks=1 00:07:35.963 --rc geninfo_unexecuted_blocks=1 00:07:35.963 00:07:35.963 ' 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:35.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.963 --rc genhtml_branch_coverage=1 00:07:35.963 --rc genhtml_function_coverage=1 00:07:35.963 --rc genhtml_legend=1 00:07:35.963 --rc geninfo_all_blocks=1 00:07:35.963 --rc geninfo_unexecuted_blocks=1 00:07:35.963 00:07:35.963 ' 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:35.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.963 --rc genhtml_branch_coverage=1 00:07:35.963 --rc genhtml_function_coverage=1 00:07:35.963 --rc genhtml_legend=1 00:07:35.963 --rc geninfo_all_blocks=1 00:07:35.963 --rc geninfo_unexecuted_blocks=1 00:07:35.963 00:07:35.963 ' 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.963 22:16:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.963 
22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.963 22:16:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:35.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.963 
22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:35.963 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:42.540 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:42.540 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:42.540 
22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:42.540 Found net devices under 0000:af:00.0: cvl_0_0 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:42.540 Found net devices under 0000:af:00.1: cvl_0_1 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.540 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:42.541 22:17:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:42.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:07:42.541 00:07:42.541 --- 10.0.0.2 ping statistics --- 00:07:42.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.541 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:42.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:07:42.541 00:07:42.541 --- 10.0.0.1 ping statistics --- 00:07:42.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.541 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=152005 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 152005 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 152005 ']' 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.541 [2024-12-14 22:17:02.578801] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:42.541 [2024-12-14 22:17:02.578843] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.541 [2024-12-14 22:17:02.652733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.541 [2024-12-14 22:17:02.674252] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.541 [2024-12-14 22:17:02.674288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.541 [2024-12-14 22:17:02.674295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.541 [2024-12-14 22:17:02.674301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.541 [2024-12-14 22:17:02.674306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:42.541 [2024-12-14 22:17:02.674784] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:42.541 [2024-12-14 22:17:02.966182] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.541 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.541 ************************************ 00:07:42.541 START TEST lvs_grow_clean 00:07:42.541 ************************************ 00:07:42.541 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:42.541 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:42.541 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:42.541 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:42.541 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:42.541 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:42.541 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:42.541 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:42.541 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:42.541 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:42.541 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:42.541 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:42.801 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=76a283cb-8804-4c0e-8c6d-661a9270d42f 00:07:42.801 22:17:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76a283cb-8804-4c0e-8c6d-661a9270d42f 00:07:42.801 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:42.801 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:42.801 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:42.801 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 76a283cb-8804-4c0e-8c6d-661a9270d42f lvol 150 00:07:43.061 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=aa219f7e-0ad3-43a9-a457-61af958b0dcc 00:07:43.061 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:43.061 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:43.320 [2024-12-14 22:17:04.010741] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:43.320 [2024-12-14 22:17:04.010784] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:43.320 true 00:07:43.320 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76a283cb-8804-4c0e-8c6d-661a9270d42f 00:07:43.320 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:43.580 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:43.580 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:43.580 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aa219f7e-0ad3-43a9-a457-61af958b0dcc 00:07:43.839 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:44.099 [2024-12-14 22:17:04.764978] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.099 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:44.099 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=152493 00:07:44.099 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:44.099 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:44.099 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 152493 /var/tmp/bdevperf.sock 00:07:44.099 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 152493 ']' 00:07:44.099 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:44.099 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.099 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:44.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:44.099 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.099 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:44.358 [2024-12-14 22:17:05.013177] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:44.358 [2024-12-14 22:17:05.013227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152493 ] 00:07:44.358 [2024-12-14 22:17:05.087942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.358 [2024-12-14 22:17:05.110592] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.358 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.358 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:44.358 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:44.926 Nvme0n1 00:07:44.926 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:44.926 [ 00:07:44.926 { 00:07:44.926 "name": "Nvme0n1", 00:07:44.926 "aliases": [ 00:07:44.926 "aa219f7e-0ad3-43a9-a457-61af958b0dcc" 00:07:44.926 ], 00:07:44.926 "product_name": "NVMe disk", 00:07:44.926 "block_size": 4096, 00:07:44.926 "num_blocks": 38912, 00:07:44.926 "uuid": "aa219f7e-0ad3-43a9-a457-61af958b0dcc", 00:07:44.926 "numa_id": 1, 00:07:44.926 "assigned_rate_limits": { 00:07:44.926 "rw_ios_per_sec": 0, 00:07:44.926 "rw_mbytes_per_sec": 0, 00:07:44.926 "r_mbytes_per_sec": 0, 00:07:44.926 "w_mbytes_per_sec": 0 00:07:44.926 }, 00:07:44.926 "claimed": false, 00:07:44.926 "zoned": false, 00:07:44.926 "supported_io_types": { 00:07:44.926 "read": true, 
00:07:44.926 "write": true, 00:07:44.926 "unmap": true, 00:07:44.926 "flush": true, 00:07:44.926 "reset": true, 00:07:44.926 "nvme_admin": true, 00:07:44.926 "nvme_io": true, 00:07:44.926 "nvme_io_md": false, 00:07:44.926 "write_zeroes": true, 00:07:44.926 "zcopy": false, 00:07:44.926 "get_zone_info": false, 00:07:44.926 "zone_management": false, 00:07:44.926 "zone_append": false, 00:07:44.926 "compare": true, 00:07:44.926 "compare_and_write": true, 00:07:44.926 "abort": true, 00:07:44.926 "seek_hole": false, 00:07:44.926 "seek_data": false, 00:07:44.926 "copy": true, 00:07:44.926 "nvme_iov_md": false 00:07:44.926 }, 00:07:44.926 "memory_domains": [ 00:07:44.926 { 00:07:44.926 "dma_device_id": "system", 00:07:44.926 "dma_device_type": 1 00:07:44.926 } 00:07:44.926 ], 00:07:44.926 "driver_specific": { 00:07:44.926 "nvme": [ 00:07:44.926 { 00:07:44.926 "trid": { 00:07:44.926 "trtype": "TCP", 00:07:44.926 "adrfam": "IPv4", 00:07:44.926 "traddr": "10.0.0.2", 00:07:44.926 "trsvcid": "4420", 00:07:44.926 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:44.926 }, 00:07:44.926 "ctrlr_data": { 00:07:44.926 "cntlid": 1, 00:07:44.926 "vendor_id": "0x8086", 00:07:44.926 "model_number": "SPDK bdev Controller", 00:07:44.926 "serial_number": "SPDK0", 00:07:44.926 "firmware_revision": "25.01", 00:07:44.926 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:44.926 "oacs": { 00:07:44.926 "security": 0, 00:07:44.926 "format": 0, 00:07:44.926 "firmware": 0, 00:07:44.926 "ns_manage": 0 00:07:44.926 }, 00:07:44.926 "multi_ctrlr": true, 00:07:44.926 "ana_reporting": false 00:07:44.926 }, 00:07:44.926 "vs": { 00:07:44.926 "nvme_version": "1.3" 00:07:44.926 }, 00:07:44.926 "ns_data": { 00:07:44.926 "id": 1, 00:07:44.926 "can_share": true 00:07:44.926 } 00:07:44.927 } 00:07:44.927 ], 00:07:44.927 "mp_policy": "active_passive" 00:07:44.927 } 00:07:44.927 } 00:07:44.927 ] 00:07:44.927 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=152505 
00:07:44.927 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:44.927 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:45.186 Running I/O for 10 seconds... 00:07:46.122 Latency(us) 00:07:46.122 [2024-12-14T21:17:07.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.122 Nvme0n1 : 1.00 23450.00 91.60 0.00 0.00 0.00 0.00 0.00 00:07:46.122 [2024-12-14T21:17:07.006Z] =================================================================================================================== 00:07:46.122 [2024-12-14T21:17:07.006Z] Total : 23450.00 91.60 0.00 0.00 0.00 0.00 0.00 00:07:46.122 00:07:47.059 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 76a283cb-8804-4c0e-8c6d-661a9270d42f 00:07:47.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.059 Nvme0n1 : 2.00 23552.00 92.00 0.00 0.00 0.00 0.00 0.00 00:07:47.059 [2024-12-14T21:17:07.943Z] =================================================================================================================== 00:07:47.059 [2024-12-14T21:17:07.943Z] Total : 23552.00 92.00 0.00 0.00 0.00 0.00 0.00 00:07:47.059 00:07:47.059 true 00:07:47.318 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76a283cb-8804-4c0e-8c6d-661a9270d42f 00:07:47.318 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:07:47.318 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:47.318 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:47.318 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 152505 00:07:48.256 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.256 Nvme0n1 : 3.00 23596.00 92.17 0.00 0.00 0.00 0.00 0.00 00:07:48.256 [2024-12-14T21:17:09.140Z] =================================================================================================================== 00:07:48.256 [2024-12-14T21:17:09.140Z] Total : 23596.00 92.17 0.00 0.00 0.00 0.00 0.00 00:07:48.256 00:07:49.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.193 Nvme0n1 : 4.00 23682.25 92.51 0.00 0.00 0.00 0.00 0.00 00:07:49.193 [2024-12-14T21:17:10.077Z] =================================================================================================================== 00:07:49.193 [2024-12-14T21:17:10.077Z] Total : 23682.25 92.51 0.00 0.00 0.00 0.00 0.00 00:07:49.193 00:07:50.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.130 Nvme0n1 : 5.00 23685.20 92.52 0.00 0.00 0.00 0.00 0.00 00:07:50.130 [2024-12-14T21:17:11.014Z] =================================================================================================================== 00:07:50.130 [2024-12-14T21:17:11.014Z] Total : 23685.20 92.52 0.00 0.00 0.00 0.00 0.00 00:07:50.130 00:07:51.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.067 Nvme0n1 : 6.00 23720.67 92.66 0.00 0.00 0.00 0.00 0.00 00:07:51.067 [2024-12-14T21:17:11.951Z] =================================================================================================================== 00:07:51.067 
[2024-12-14T21:17:11.951Z] Total : 23720.67 92.66 0.00 0.00 0.00 0.00 0.00 00:07:51.067 00:07:52.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.004 Nvme0n1 : 7.00 23753.86 92.79 0.00 0.00 0.00 0.00 0.00 00:07:52.004 [2024-12-14T21:17:12.888Z] =================================================================================================================== 00:07:52.004 [2024-12-14T21:17:12.888Z] Total : 23753.86 92.79 0.00 0.00 0.00 0.00 0.00 00:07:52.004 00:07:53.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.382 Nvme0n1 : 8.00 23770.25 92.85 0.00 0.00 0.00 0.00 0.00 00:07:53.382 [2024-12-14T21:17:14.266Z] =================================================================================================================== 00:07:53.382 [2024-12-14T21:17:14.266Z] Total : 23770.25 92.85 0.00 0.00 0.00 0.00 0.00 00:07:53.382 00:07:54.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.319 Nvme0n1 : 9.00 23789.11 92.93 0.00 0.00 0.00 0.00 0.00 00:07:54.319 [2024-12-14T21:17:15.203Z] =================================================================================================================== 00:07:54.319 [2024-12-14T21:17:15.203Z] Total : 23789.11 92.93 0.00 0.00 0.00 0.00 0.00 00:07:54.319 00:07:55.256 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.256 Nvme0n1 : 10.00 23810.60 93.01 0.00 0.00 0.00 0.00 0.00 00:07:55.256 [2024-12-14T21:17:16.140Z] =================================================================================================================== 00:07:55.256 [2024-12-14T21:17:16.140Z] Total : 23810.60 93.01 0.00 0.00 0.00 0.00 0.00 00:07:55.256 00:07:55.256 00:07:55.256 Latency(us) 00:07:55.256 [2024-12-14T21:17:16.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.256 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:55.256 Nvme0n1 : 10.01 23809.58 93.01 0.00 0.00 5372.95 2215.74 10735.42 00:07:55.256 [2024-12-14T21:17:16.140Z] =================================================================================================================== 00:07:55.256 [2024-12-14T21:17:16.140Z] Total : 23809.58 93.01 0.00 0.00 5372.95 2215.74 10735.42 00:07:55.256 { 00:07:55.256 "results": [ 00:07:55.256 { 00:07:55.256 "job": "Nvme0n1", 00:07:55.256 "core_mask": "0x2", 00:07:55.256 "workload": "randwrite", 00:07:55.256 "status": "finished", 00:07:55.256 "queue_depth": 128, 00:07:55.256 "io_size": 4096, 00:07:55.256 "runtime": 10.005803, 00:07:55.256 "iops": 23809.5832988117, 00:07:55.256 "mibps": 93.0061847609832, 00:07:55.256 "io_failed": 0, 00:07:55.256 "io_timeout": 0, 00:07:55.256 "avg_latency_us": 5372.945684822886, 00:07:55.256 "min_latency_us": 2215.7409523809524, 00:07:55.256 "max_latency_us": 10735.420952380953 00:07:55.256 } 00:07:55.256 ], 00:07:55.256 "core_count": 1 00:07:55.256 } 00:07:55.256 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 152493 00:07:55.256 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 152493 ']' 00:07:55.256 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 152493 00:07:55.256 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:55.256 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.256 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 152493 00:07:55.256 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:55.256 22:17:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:55.256 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 152493' 00:07:55.256 killing process with pid 152493 00:07:55.256 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 152493 00:07:55.256 Received shutdown signal, test time was about 10.000000 seconds 00:07:55.256 00:07:55.256 Latency(us) 00:07:55.256 [2024-12-14T21:17:16.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.256 [2024-12-14T21:17:16.140Z] =================================================================================================================== 00:07:55.256 [2024-12-14T21:17:16.140Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:55.256 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 152493 00:07:55.256 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:55.516 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:55.775 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:55.775 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76a283cb-8804-4c0e-8c6d-661a9270d42f 00:07:56.035 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:07:56.035 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:56.035 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:56.035 [2024-12-14 22:17:16.849576] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:56.035 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76a283cb-8804-4c0e-8c6d-661a9270d42f 00:07:56.035 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:56.035 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76a283cb-8804-4c0e-8c6d-661a9270d42f 00:07:56.035 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.035 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.035 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.035 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.035 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.035 22:17:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.035 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.035 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:56.035 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76a283cb-8804-4c0e-8c6d-661a9270d42f 00:07:56.294 request: 00:07:56.294 { 00:07:56.294 "uuid": "76a283cb-8804-4c0e-8c6d-661a9270d42f", 00:07:56.294 "method": "bdev_lvol_get_lvstores", 00:07:56.294 "req_id": 1 00:07:56.294 } 00:07:56.294 Got JSON-RPC error response 00:07:56.294 response: 00:07:56.294 { 00:07:56.294 "code": -19, 00:07:56.294 "message": "No such device" 00:07:56.294 } 00:07:56.294 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:56.294 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:56.294 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:56.294 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:56.294 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:56.554 aio_bdev 00:07:56.554 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev aa219f7e-0ad3-43a9-a457-61af958b0dcc 00:07:56.554 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=aa219f7e-0ad3-43a9-a457-61af958b0dcc 00:07:56.554 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:56.554 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:56.554 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:56.554 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:56.554 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:56.814 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aa219f7e-0ad3-43a9-a457-61af958b0dcc -t 2000 00:07:56.814 [ 00:07:56.814 { 00:07:56.814 "name": "aa219f7e-0ad3-43a9-a457-61af958b0dcc", 00:07:56.814 "aliases": [ 00:07:56.814 "lvs/lvol" 00:07:56.814 ], 00:07:56.814 "product_name": "Logical Volume", 00:07:56.814 "block_size": 4096, 00:07:56.814 "num_blocks": 38912, 00:07:56.814 "uuid": "aa219f7e-0ad3-43a9-a457-61af958b0dcc", 00:07:56.814 "assigned_rate_limits": { 00:07:56.814 "rw_ios_per_sec": 0, 00:07:56.814 "rw_mbytes_per_sec": 0, 00:07:56.814 "r_mbytes_per_sec": 0, 00:07:56.814 "w_mbytes_per_sec": 0 00:07:56.814 }, 00:07:56.814 "claimed": false, 00:07:56.814 "zoned": false, 00:07:56.814 "supported_io_types": { 00:07:56.814 "read": true, 00:07:56.814 "write": true, 00:07:56.814 "unmap": true, 00:07:56.814 "flush": false, 00:07:56.814 "reset": true, 00:07:56.814 
"nvme_admin": false, 00:07:56.814 "nvme_io": false, 00:07:56.814 "nvme_io_md": false, 00:07:56.814 "write_zeroes": true, 00:07:56.814 "zcopy": false, 00:07:56.814 "get_zone_info": false, 00:07:56.814 "zone_management": false, 00:07:56.814 "zone_append": false, 00:07:56.814 "compare": false, 00:07:56.814 "compare_and_write": false, 00:07:56.814 "abort": false, 00:07:56.814 "seek_hole": true, 00:07:56.814 "seek_data": true, 00:07:56.814 "copy": false, 00:07:56.814 "nvme_iov_md": false 00:07:56.814 }, 00:07:56.814 "driver_specific": { 00:07:56.814 "lvol": { 00:07:56.814 "lvol_store_uuid": "76a283cb-8804-4c0e-8c6d-661a9270d42f", 00:07:56.814 "base_bdev": "aio_bdev", 00:07:56.814 "thin_provision": false, 00:07:56.814 "num_allocated_clusters": 38, 00:07:56.814 "snapshot": false, 00:07:56.814 "clone": false, 00:07:56.814 "esnap_clone": false 00:07:56.814 } 00:07:56.814 } 00:07:56.814 } 00:07:56.814 ] 00:07:56.814 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:56.814 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:56.814 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76a283cb-8804-4c0e-8c6d-661a9270d42f 00:07:57.073 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:57.073 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76a283cb-8804-4c0e-8c6d-661a9270d42f 00:07:57.073 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:57.332 22:17:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:57.332 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete aa219f7e-0ad3-43a9-a457-61af958b0dcc 00:07:57.332 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 76a283cb-8804-4c0e-8c6d-661a9270d42f 00:07:57.592 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:57.851 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:57.851 00:07:57.851 real 0m15.584s 00:07:57.851 user 0m15.107s 00:07:57.851 sys 0m1.506s 00:07:57.851 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.851 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:57.851 ************************************ 00:07:57.851 END TEST lvs_grow_clean 00:07:57.851 ************************************ 00:07:57.851 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:57.851 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:57.851 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.851 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:57.851 ************************************ 
00:07:57.851 START TEST lvs_grow_dirty 00:07:57.851 ************************************ 00:07:57.851 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:57.851 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:57.851 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:57.851 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:57.851 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:57.851 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:57.851 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:57.851 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:57.851 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:57.851 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:58.111 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:58.111 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:58.370 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3020df16-8301-407e-98ab-ff49c0727290 00:07:58.370 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3020df16-8301-407e-98ab-ff49c0727290 00:07:58.370 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:58.630 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:58.630 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:58.630 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3020df16-8301-407e-98ab-ff49c0727290 lvol 150 00:07:58.630 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8656a8f6-b408-4423-a98b-72292f1c2cbc 00:07:58.630 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:58.630 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:58.890 [2024-12-14 22:17:19.646664] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:58.890 [2024-12-14 22:17:19.646711] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:58.890 true 00:07:58.890 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:58.890 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3020df16-8301-407e-98ab-ff49c0727290 00:07:59.149 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:59.149 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:59.409 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8656a8f6-b408-4423-a98b-72292f1c2cbc 00:07:59.409 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:59.668 [2024-12-14 22:17:20.412912] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.668 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:59.928 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:59.928 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=155029 00:07:59.928 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:59.928 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 155029 /var/tmp/bdevperf.sock 00:07:59.928 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 155029 ']' 00:07:59.928 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:59.928 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.928 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:59.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:59.928 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.928 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:59.928 [2024-12-14 22:17:20.639501] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:59.928 [2024-12-14 22:17:20.639548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155029 ] 00:07:59.928 [2024-12-14 22:17:20.711077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.928 [2024-12-14 22:17:20.733220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.187 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.187 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:00.187 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:00.446 Nvme0n1 00:08:00.446 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:00.706 [ 00:08:00.706 { 00:08:00.706 "name": "Nvme0n1", 00:08:00.706 "aliases": [ 00:08:00.706 "8656a8f6-b408-4423-a98b-72292f1c2cbc" 00:08:00.706 ], 00:08:00.706 "product_name": "NVMe disk", 00:08:00.706 "block_size": 4096, 00:08:00.706 "num_blocks": 38912, 00:08:00.706 "uuid": "8656a8f6-b408-4423-a98b-72292f1c2cbc", 00:08:00.706 "numa_id": 1, 00:08:00.706 "assigned_rate_limits": { 00:08:00.706 "rw_ios_per_sec": 0, 00:08:00.706 "rw_mbytes_per_sec": 0, 00:08:00.706 "r_mbytes_per_sec": 0, 00:08:00.706 "w_mbytes_per_sec": 0 00:08:00.706 }, 00:08:00.706 "claimed": false, 00:08:00.706 "zoned": false, 00:08:00.706 "supported_io_types": { 00:08:00.706 "read": true, 
00:08:00.706 "write": true, 00:08:00.706 "unmap": true, 00:08:00.706 "flush": true, 00:08:00.706 "reset": true, 00:08:00.706 "nvme_admin": true, 00:08:00.706 "nvme_io": true, 00:08:00.706 "nvme_io_md": false, 00:08:00.706 "write_zeroes": true, 00:08:00.706 "zcopy": false, 00:08:00.706 "get_zone_info": false, 00:08:00.706 "zone_management": false, 00:08:00.706 "zone_append": false, 00:08:00.706 "compare": true, 00:08:00.706 "compare_and_write": true, 00:08:00.706 "abort": true, 00:08:00.706 "seek_hole": false, 00:08:00.706 "seek_data": false, 00:08:00.706 "copy": true, 00:08:00.706 "nvme_iov_md": false 00:08:00.706 }, 00:08:00.706 "memory_domains": [ 00:08:00.706 { 00:08:00.706 "dma_device_id": "system", 00:08:00.706 "dma_device_type": 1 00:08:00.706 } 00:08:00.706 ], 00:08:00.706 "driver_specific": { 00:08:00.706 "nvme": [ 00:08:00.706 { 00:08:00.706 "trid": { 00:08:00.706 "trtype": "TCP", 00:08:00.706 "adrfam": "IPv4", 00:08:00.706 "traddr": "10.0.0.2", 00:08:00.706 "trsvcid": "4420", 00:08:00.706 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:00.706 }, 00:08:00.706 "ctrlr_data": { 00:08:00.706 "cntlid": 1, 00:08:00.706 "vendor_id": "0x8086", 00:08:00.706 "model_number": "SPDK bdev Controller", 00:08:00.706 "serial_number": "SPDK0", 00:08:00.706 "firmware_revision": "25.01", 00:08:00.706 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:00.706 "oacs": { 00:08:00.706 "security": 0, 00:08:00.706 "format": 0, 00:08:00.706 "firmware": 0, 00:08:00.706 "ns_manage": 0 00:08:00.706 }, 00:08:00.706 "multi_ctrlr": true, 00:08:00.706 "ana_reporting": false 00:08:00.706 }, 00:08:00.706 "vs": { 00:08:00.706 "nvme_version": "1.3" 00:08:00.706 }, 00:08:00.706 "ns_data": { 00:08:00.706 "id": 1, 00:08:00.706 "can_share": true 00:08:00.706 } 00:08:00.706 } 00:08:00.706 ], 00:08:00.706 "mp_policy": "active_passive" 00:08:00.706 } 00:08:00.706 } 00:08:00.706 ] 00:08:00.706 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=155252 
00:08:00.706 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:00.706 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:00.706 Running I/O for 10 seconds... 00:08:01.644 Latency(us) 00:08:01.644 [2024-12-14T21:17:22.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.644 Nvme0n1 : 1.00 23351.00 91.21 0.00 0.00 0.00 0.00 0.00 00:08:01.644 [2024-12-14T21:17:22.528Z] =================================================================================================================== 00:08:01.644 [2024-12-14T21:17:22.528Z] Total : 23351.00 91.21 0.00 0.00 0.00 0.00 0.00 00:08:01.644 00:08:02.582 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3020df16-8301-407e-98ab-ff49c0727290 00:08:02.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.841 Nvme0n1 : 2.00 23576.00 92.09 0.00 0.00 0.00 0.00 0.00 00:08:02.841 [2024-12-14T21:17:23.725Z] =================================================================================================================== 00:08:02.841 [2024-12-14T21:17:23.725Z] Total : 23576.00 92.09 0.00 0.00 0.00 0.00 0.00 00:08:02.841 00:08:02.841 true 00:08:02.841 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3020df16-8301-407e-98ab-ff49c0727290 00:08:02.841 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:08:03.101 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:03.101 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:03.101 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 155252 00:08:03.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.670 Nvme0n1 : 3.00 23625.67 92.29 0.00 0.00 0.00 0.00 0.00 00:08:03.670 [2024-12-14T21:17:24.554Z] =================================================================================================================== 00:08:03.670 [2024-12-14T21:17:24.554Z] Total : 23625.67 92.29 0.00 0.00 0.00 0.00 0.00 00:08:03.670 00:08:05.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.049 Nvme0n1 : 4.00 23683.00 92.51 0.00 0.00 0.00 0.00 0.00 00:08:05.049 [2024-12-14T21:17:25.933Z] =================================================================================================================== 00:08:05.049 [2024-12-14T21:17:25.933Z] Total : 23683.00 92.51 0.00 0.00 0.00 0.00 0.00 00:08:05.049 00:08:05.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.987 Nvme0n1 : 5.00 23732.80 92.71 0.00 0.00 0.00 0.00 0.00 00:08:05.987 [2024-12-14T21:17:26.871Z] =================================================================================================================== 00:08:05.987 [2024-12-14T21:17:26.871Z] Total : 23732.80 92.71 0.00 0.00 0.00 0.00 0.00 00:08:05.987 00:08:06.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.925 Nvme0n1 : 6.00 23778.33 92.88 0.00 0.00 0.00 0.00 0.00 00:08:06.925 [2024-12-14T21:17:27.810Z] =================================================================================================================== 00:08:06.926 
[2024-12-14T21:17:27.810Z] Total : 23778.33 92.88 0.00 0.00 0.00 0.00 0.00 00:08:06.926 00:08:07.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.863 Nvme0n1 : 7.00 23797.86 92.96 0.00 0.00 0.00 0.00 0.00 00:08:07.863 [2024-12-14T21:17:28.747Z] =================================================================================================================== 00:08:07.863 [2024-12-14T21:17:28.747Z] Total : 23797.86 92.96 0.00 0.00 0.00 0.00 0.00 00:08:07.863 00:08:08.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.801 Nvme0n1 : 8.00 23828.00 93.08 0.00 0.00 0.00 0.00 0.00 00:08:08.801 [2024-12-14T21:17:29.685Z] =================================================================================================================== 00:08:08.801 [2024-12-14T21:17:29.685Z] Total : 23828.00 93.08 0.00 0.00 0.00 0.00 0.00 00:08:08.801 00:08:09.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.758 Nvme0n1 : 9.00 23824.11 93.06 0.00 0.00 0.00 0.00 0.00 00:08:09.758 [2024-12-14T21:17:30.642Z] =================================================================================================================== 00:08:09.758 [2024-12-14T21:17:30.642Z] Total : 23824.11 93.06 0.00 0.00 0.00 0.00 0.00 00:08:09.758 00:08:10.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.695 Nvme0n1 : 10.00 23848.50 93.16 0.00 0.00 0.00 0.00 0.00 00:08:10.695 [2024-12-14T21:17:31.579Z] =================================================================================================================== 00:08:10.695 [2024-12-14T21:17:31.579Z] Total : 23848.50 93.16 0.00 0.00 0.00 0.00 0.00 00:08:10.695 00:08:10.695 00:08:10.695 Latency(us) 00:08:10.695 [2024-12-14T21:17:31.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:10.695 Nvme0n1 : 10.01 23847.64 93.15 0.00 0.00 5364.34 3136.37 12670.29 00:08:10.695 [2024-12-14T21:17:31.579Z] =================================================================================================================== 00:08:10.695 [2024-12-14T21:17:31.579Z] Total : 23847.64 93.15 0.00 0.00 5364.34 3136.37 12670.29 00:08:10.695 { 00:08:10.695 "results": [ 00:08:10.695 { 00:08:10.695 "job": "Nvme0n1", 00:08:10.695 "core_mask": "0x2", 00:08:10.695 "workload": "randwrite", 00:08:10.695 "status": "finished", 00:08:10.695 "queue_depth": 128, 00:08:10.695 "io_size": 4096, 00:08:10.695 "runtime": 10.005728, 00:08:10.695 "iops": 23847.64007176689, 00:08:10.695 "mibps": 93.15484403033942, 00:08:10.695 "io_failed": 0, 00:08:10.695 "io_timeout": 0, 00:08:10.695 "avg_latency_us": 5364.336025598734, 00:08:10.695 "min_latency_us": 3136.365714285714, 00:08:10.695 "max_latency_us": 12670.293333333333 00:08:10.695 } 00:08:10.695 ], 00:08:10.695 "core_count": 1 00:08:10.695 } 00:08:10.695 22:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 155029 00:08:10.695 22:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 155029 ']' 00:08:10.695 22:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 155029 00:08:10.695 22:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:10.954 22:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.954 22:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 155029 00:08:10.954 22:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:10.954 22:17:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:10.954 22:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 155029' 00:08:10.954 killing process with pid 155029 00:08:10.954 22:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 155029 00:08:10.954 Received shutdown signal, test time was about 10.000000 seconds 00:08:10.954 00:08:10.954 Latency(us) 00:08:10.954 [2024-12-14T21:17:31.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.954 [2024-12-14T21:17:31.838Z] =================================================================================================================== 00:08:10.954 [2024-12-14T21:17:31.838Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:10.954 22:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 155029 00:08:10.954 22:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:11.213 22:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:11.473 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3020df16-8301-407e-98ab-ff49c0727290 00:08:11.473 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:11.732 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:08:11.732 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:11.732 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 152005 00:08:11.732 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 152005 00:08:11.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 152005 Killed "${NVMF_APP[@]}" "$@" 00:08:11.732 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:11.732 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:11.732 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:11.732 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:11.732 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:11.732 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=157055 00:08:11.732 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 157055 00:08:11.732 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:11.732 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 157055 ']' 00:08:11.732 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.732 22:17:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.732 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.732 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.732 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:11.732 [2024-12-14 22:17:32.486258] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:11.732 [2024-12-14 22:17:32.486304] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.732 [2024-12-14 22:17:32.562842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.732 [2024-12-14 22:17:32.583091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.732 [2024-12-14 22:17:32.583124] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.732 [2024-12-14 22:17:32.583132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.732 [2024-12-14 22:17:32.583138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.732 [2024-12-14 22:17:32.583142] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:11.732 [2024-12-14 22:17:32.583658] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.992 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.992 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:11.992 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:11.992 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:11.992 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:11.992 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.992 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:12.251 [2024-12-14 22:17:32.888395] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:12.251 [2024-12-14 22:17:32.888489] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:12.251 [2024-12-14 22:17:32.888515] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:12.251 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:12.251 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8656a8f6-b408-4423-a98b-72292f1c2cbc 00:08:12.251 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8656a8f6-b408-4423-a98b-72292f1c2cbc 
00:08:12.251 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.251 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:12.251 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.251 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.251 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:12.251 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8656a8f6-b408-4423-a98b-72292f1c2cbc -t 2000 00:08:12.510 [ 00:08:12.510 { 00:08:12.510 "name": "8656a8f6-b408-4423-a98b-72292f1c2cbc", 00:08:12.510 "aliases": [ 00:08:12.510 "lvs/lvol" 00:08:12.510 ], 00:08:12.511 "product_name": "Logical Volume", 00:08:12.511 "block_size": 4096, 00:08:12.511 "num_blocks": 38912, 00:08:12.511 "uuid": "8656a8f6-b408-4423-a98b-72292f1c2cbc", 00:08:12.511 "assigned_rate_limits": { 00:08:12.511 "rw_ios_per_sec": 0, 00:08:12.511 "rw_mbytes_per_sec": 0, 00:08:12.511 "r_mbytes_per_sec": 0, 00:08:12.511 "w_mbytes_per_sec": 0 00:08:12.511 }, 00:08:12.511 "claimed": false, 00:08:12.511 "zoned": false, 00:08:12.511 "supported_io_types": { 00:08:12.511 "read": true, 00:08:12.511 "write": true, 00:08:12.511 "unmap": true, 00:08:12.511 "flush": false, 00:08:12.511 "reset": true, 00:08:12.511 "nvme_admin": false, 00:08:12.511 "nvme_io": false, 00:08:12.511 "nvme_io_md": false, 00:08:12.511 "write_zeroes": true, 00:08:12.511 "zcopy": false, 00:08:12.511 "get_zone_info": false, 00:08:12.511 "zone_management": false, 00:08:12.511 "zone_append": 
false, 00:08:12.511 "compare": false, 00:08:12.511 "compare_and_write": false, 00:08:12.511 "abort": false, 00:08:12.511 "seek_hole": true, 00:08:12.511 "seek_data": true, 00:08:12.511 "copy": false, 00:08:12.511 "nvme_iov_md": false 00:08:12.511 }, 00:08:12.511 "driver_specific": { 00:08:12.511 "lvol": { 00:08:12.511 "lvol_store_uuid": "3020df16-8301-407e-98ab-ff49c0727290", 00:08:12.511 "base_bdev": "aio_bdev", 00:08:12.511 "thin_provision": false, 00:08:12.511 "num_allocated_clusters": 38, 00:08:12.511 "snapshot": false, 00:08:12.511 "clone": false, 00:08:12.511 "esnap_clone": false 00:08:12.511 } 00:08:12.511 } 00:08:12.511 } 00:08:12.511 ] 00:08:12.511 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:12.511 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3020df16-8301-407e-98ab-ff49c0727290 00:08:12.511 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:12.770 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:12.770 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3020df16-8301-407e-98ab-ff49c0727290 00:08:12.770 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:13.031 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:13.031 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:13.031 [2024-12-14 22:17:33.833173] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:13.031 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3020df16-8301-407e-98ab-ff49c0727290 00:08:13.031 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:13.031 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3020df16-8301-407e-98ab-ff49c0727290 00:08:13.031 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:13.031 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.031 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:13.031 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.031 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:13.031 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.031 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:13.031 22:17:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:13.031 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3020df16-8301-407e-98ab-ff49c0727290 00:08:13.291 request: 00:08:13.291 { 00:08:13.291 "uuid": "3020df16-8301-407e-98ab-ff49c0727290", 00:08:13.291 "method": "bdev_lvol_get_lvstores", 00:08:13.291 "req_id": 1 00:08:13.291 } 00:08:13.291 Got JSON-RPC error response 00:08:13.291 response: 00:08:13.291 { 00:08:13.291 "code": -19, 00:08:13.291 "message": "No such device" 00:08:13.291 } 00:08:13.291 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:13.291 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:13.291 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:13.291 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:13.291 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:13.550 aio_bdev 00:08:13.550 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8656a8f6-b408-4423-a98b-72292f1c2cbc 00:08:13.550 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8656a8f6-b408-4423-a98b-72292f1c2cbc 00:08:13.550 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.550 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:13.550 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.550 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.550 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:13.550 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8656a8f6-b408-4423-a98b-72292f1c2cbc -t 2000 00:08:13.809 [ 00:08:13.809 { 00:08:13.809 "name": "8656a8f6-b408-4423-a98b-72292f1c2cbc", 00:08:13.809 "aliases": [ 00:08:13.809 "lvs/lvol" 00:08:13.809 ], 00:08:13.809 "product_name": "Logical Volume", 00:08:13.809 "block_size": 4096, 00:08:13.809 "num_blocks": 38912, 00:08:13.809 "uuid": "8656a8f6-b408-4423-a98b-72292f1c2cbc", 00:08:13.809 "assigned_rate_limits": { 00:08:13.809 "rw_ios_per_sec": 0, 00:08:13.809 "rw_mbytes_per_sec": 0, 00:08:13.809 "r_mbytes_per_sec": 0, 00:08:13.809 "w_mbytes_per_sec": 0 00:08:13.809 }, 00:08:13.809 "claimed": false, 00:08:13.809 "zoned": false, 00:08:13.809 "supported_io_types": { 00:08:13.809 "read": true, 00:08:13.809 "write": true, 00:08:13.809 "unmap": true, 00:08:13.809 "flush": false, 00:08:13.809 "reset": true, 00:08:13.809 "nvme_admin": false, 00:08:13.809 "nvme_io": false, 00:08:13.809 "nvme_io_md": false, 00:08:13.809 "write_zeroes": true, 00:08:13.809 "zcopy": false, 00:08:13.809 "get_zone_info": false, 00:08:13.809 "zone_management": false, 00:08:13.809 "zone_append": false, 00:08:13.809 "compare": false, 00:08:13.809 "compare_and_write": false, 
00:08:13.809 "abort": false, 00:08:13.809 "seek_hole": true, 00:08:13.809 "seek_data": true, 00:08:13.809 "copy": false, 00:08:13.809 "nvme_iov_md": false 00:08:13.809 }, 00:08:13.809 "driver_specific": { 00:08:13.809 "lvol": { 00:08:13.809 "lvol_store_uuid": "3020df16-8301-407e-98ab-ff49c0727290", 00:08:13.809 "base_bdev": "aio_bdev", 00:08:13.809 "thin_provision": false, 00:08:13.809 "num_allocated_clusters": 38, 00:08:13.809 "snapshot": false, 00:08:13.809 "clone": false, 00:08:13.809 "esnap_clone": false 00:08:13.809 } 00:08:13.809 } 00:08:13.809 } 00:08:13.809 ] 00:08:13.809 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:13.809 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3020df16-8301-407e-98ab-ff49c0727290 00:08:13.809 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:14.068 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:14.068 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3020df16-8301-407e-98ab-ff49c0727290 00:08:14.068 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:14.327 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:14.327 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8656a8f6-b408-4423-a98b-72292f1c2cbc 00:08:14.327 22:17:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3020df16-8301-407e-98ab-ff49c0727290 00:08:14.586 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:14.845 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:14.845 00:08:14.845 real 0m16.878s 00:08:14.845 user 0m43.729s 00:08:14.845 sys 0m3.751s 00:08:14.845 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.845 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:14.845 ************************************ 00:08:14.845 END TEST lvs_grow_dirty 00:08:14.845 ************************************ 00:08:14.845 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:14.845 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:14.845 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:14.845 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:14.845 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:14.845 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:14.845 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:14.845 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:14.845 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:14.845 nvmf_trace.0 00:08:14.845 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:14.846 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:14.846 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:14.846 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:14.846 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:14.846 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:14.846 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:14.846 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:14.846 rmmod nvme_tcp 00:08:14.846 rmmod nvme_fabrics 00:08:14.846 rmmod nvme_keyring 00:08:14.846 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:14.846 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:14.846 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:14.846 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 157055 ']' 00:08:14.846 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 157055 00:08:14.846 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 157055 ']' 00:08:14.846 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 157055 
00:08:14.846 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:14.846 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.846 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 157055 00:08:15.106 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.106 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.106 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 157055' 00:08:15.106 killing process with pid 157055 00:08:15.106 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 157055 00:08:15.106 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 157055 00:08:15.106 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:15.106 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:15.106 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:15.106 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:15.106 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:15.106 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:15.106 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:15.106 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:15.106 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:08:15.106 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.106 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.106 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.644 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:17.644 00:08:17.644 real 0m41.642s 00:08:17.644 user 1m4.393s 00:08:17.644 sys 0m10.125s 00:08:17.644 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.644 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:17.644 ************************************ 00:08:17.644 END TEST nvmf_lvs_grow 00:08:17.644 ************************************ 00:08:17.644 22:17:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:17.644 22:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:17.644 22:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.644 22:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:17.644 ************************************ 00:08:17.644 START TEST nvmf_bdev_io_wait 00:08:17.644 ************************************ 00:08:17.644 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:17.644 * Looking for test storage... 
00:08:17.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.644 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:17.644 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:17.644 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:17.644 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:17.644 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:17.645 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.645 --rc genhtml_branch_coverage=1 00:08:17.645 --rc genhtml_function_coverage=1 00:08:17.645 --rc genhtml_legend=1 00:08:17.645 --rc geninfo_all_blocks=1 00:08:17.645 --rc geninfo_unexecuted_blocks=1 00:08:17.645 00:08:17.645 ' 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:17.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.645 --rc genhtml_branch_coverage=1 00:08:17.645 --rc genhtml_function_coverage=1 00:08:17.645 --rc genhtml_legend=1 00:08:17.645 --rc geninfo_all_blocks=1 00:08:17.645 --rc geninfo_unexecuted_blocks=1 00:08:17.645 00:08:17.645 ' 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:17.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.645 --rc genhtml_branch_coverage=1 00:08:17.645 --rc genhtml_function_coverage=1 00:08:17.645 --rc genhtml_legend=1 00:08:17.645 --rc geninfo_all_blocks=1 00:08:17.645 --rc geninfo_unexecuted_blocks=1 00:08:17.645 00:08:17.645 ' 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:17.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.645 --rc genhtml_branch_coverage=1 00:08:17.645 --rc genhtml_function_coverage=1 00:08:17.645 --rc genhtml_legend=1 00:08:17.645 --rc geninfo_all_blocks=1 00:08:17.645 --rc geninfo_unexecuted_blocks=1 00:08:17.645 00:08:17.645 ' 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.645 22:17:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:17.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.645 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.646 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:17.646 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:17.646 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:17.646 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:17.646 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:24.223 22:17:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:24.223 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:24.223 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.223 22:17:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:24.223 Found net devices under 0000:af:00.0: cvl_0_0 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.223 
22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:24.223 Found net devices under 0000:af:00.1: cvl_0_1 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:24.223 22:17:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:24.223 22:17:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:24.223 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:24.223 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:24.223 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:24.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:24.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:08:24.224 00:08:24.224 --- 10.0.0.2 ping statistics --- 00:08:24.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.224 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:24.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:24.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:08:24.224 00:08:24.224 --- 10.0.0.1 ping statistics --- 00:08:24.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.224 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=161139 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 161139 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 161139 ']' 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.224 [2024-12-14 22:17:44.288985] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:24.224 [2024-12-14 22:17:44.289037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.224 [2024-12-14 22:17:44.365331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.224 [2024-12-14 22:17:44.389619] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.224 [2024-12-14 22:17:44.389661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:24.224 [2024-12-14 22:17:44.389670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.224 [2024-12-14 22:17:44.389676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.224 [2024-12-14 22:17:44.389681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.224 [2024-12-14 22:17:44.391049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.224 [2024-12-14 22:17:44.391088] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.224 [2024-12-14 22:17:44.391197] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.224 [2024-12-14 22:17:44.391198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.224 22:17:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.224 [2024-12-14 22:17:44.563295] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.224 Malloc0 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.224 
22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.224 [2024-12-14 22:17:44.614412] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=161285 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=161287 
00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:24.224 { 00:08:24.224 "params": { 00:08:24.224 "name": "Nvme$subsystem", 00:08:24.224 "trtype": "$TEST_TRANSPORT", 00:08:24.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.224 "adrfam": "ipv4", 00:08:24.224 "trsvcid": "$NVMF_PORT", 00:08:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.224 "hdgst": ${hdgst:-false}, 00:08:24.224 "ddgst": ${ddgst:-false} 00:08:24.224 }, 00:08:24.224 "method": "bdev_nvme_attach_controller" 00:08:24.224 } 00:08:24.224 EOF 00:08:24.224 )") 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=161289 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:24.224 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:24.225 { 00:08:24.225 "params": { 00:08:24.225 "name": "Nvme$subsystem", 00:08:24.225 "trtype": "$TEST_TRANSPORT", 00:08:24.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.225 "adrfam": "ipv4", 00:08:24.225 "trsvcid": "$NVMF_PORT", 00:08:24.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.225 "hdgst": ${hdgst:-false}, 00:08:24.225 "ddgst": ${ddgst:-false} 00:08:24.225 }, 00:08:24.225 "method": "bdev_nvme_attach_controller" 00:08:24.225 } 00:08:24.225 EOF 00:08:24.225 )") 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=161292 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:24.225 { 00:08:24.225 "params": { 00:08:24.225 "name": "Nvme$subsystem", 00:08:24.225 "trtype": "$TEST_TRANSPORT", 00:08:24.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.225 "adrfam": "ipv4", 00:08:24.225 "trsvcid": "$NVMF_PORT", 00:08:24.225 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.225 "hdgst": ${hdgst:-false}, 00:08:24.225 "ddgst": ${ddgst:-false} 00:08:24.225 }, 00:08:24.225 "method": "bdev_nvme_attach_controller" 00:08:24.225 } 00:08:24.225 EOF 00:08:24.225 )") 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:24.225 { 00:08:24.225 "params": { 00:08:24.225 "name": "Nvme$subsystem", 00:08:24.225 "trtype": "$TEST_TRANSPORT", 00:08:24.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.225 "adrfam": "ipv4", 00:08:24.225 "trsvcid": "$NVMF_PORT", 00:08:24.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.225 "hdgst": ${hdgst:-false}, 00:08:24.225 "ddgst": ${ddgst:-false} 00:08:24.225 }, 00:08:24.225 "method": "bdev_nvme_attach_controller" 00:08:24.225 } 00:08:24.225 EOF 00:08:24.225 )") 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 161285 00:08:24.225 22:17:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:24.225 "params": { 00:08:24.225 "name": "Nvme1", 00:08:24.225 "trtype": "tcp", 00:08:24.225 "traddr": "10.0.0.2", 00:08:24.225 "adrfam": "ipv4", 00:08:24.225 "trsvcid": "4420", 00:08:24.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.225 "hdgst": false, 00:08:24.225 "ddgst": false 00:08:24.225 }, 00:08:24.225 "method": "bdev_nvme_attach_controller" 00:08:24.225 }' 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:24.225 "params": { 00:08:24.225 "name": "Nvme1", 00:08:24.225 "trtype": "tcp", 00:08:24.225 "traddr": "10.0.0.2", 00:08:24.225 "adrfam": "ipv4", 00:08:24.225 "trsvcid": "4420", 00:08:24.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.225 "hdgst": false, 00:08:24.225 "ddgst": false 00:08:24.225 }, 00:08:24.225 "method": "bdev_nvme_attach_controller" 00:08:24.225 }' 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:24.225 "params": { 00:08:24.225 "name": "Nvme1", 00:08:24.225 "trtype": "tcp", 00:08:24.225 "traddr": "10.0.0.2", 00:08:24.225 "adrfam": "ipv4", 00:08:24.225 "trsvcid": "4420", 00:08:24.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.225 "hdgst": false, 00:08:24.225 "ddgst": false 00:08:24.225 }, 00:08:24.225 "method": "bdev_nvme_attach_controller" 00:08:24.225 }' 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:24.225 22:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:24.225 "params": { 00:08:24.225 "name": "Nvme1", 00:08:24.225 "trtype": "tcp", 00:08:24.225 "traddr": "10.0.0.2", 00:08:24.225 "adrfam": "ipv4", 00:08:24.225 "trsvcid": "4420", 00:08:24.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.225 "hdgst": false, 00:08:24.225 "ddgst": false 00:08:24.225 }, 00:08:24.225 "method": "bdev_nvme_attach_controller" 00:08:24.225 }' 00:08:24.225 [2024-12-14 22:17:44.665631] Starting SPDK v25.01-pre git sha1 
e01cb43b8 / DPDK 22.11.4 initialization... 00:08:24.225 [2024-12-14 22:17:44.665672] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:24.225 [2024-12-14 22:17:44.667869] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... [2024-12-14 22:17:44.667874] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:24.225 [2024-12-14 22:17:44.667926] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:24.225 [2024-12-14 22:17:44.667927] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:24.225 [2024-12-14 22:17:44.671067] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:24.225 [2024-12-14 22:17:44.671109] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:24.225 [2024-12-14 22:17:44.819731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.225 [2024-12-14 22:17:44.834010] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:08:24.225 [2024-12-14 22:17:44.913898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.225 [2024-12-14 22:17:44.931249] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:08:24.225 [2024-12-14 22:17:45.010970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.225 [2024-12-14 22:17:45.034269] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:24.225 [2024-12-14 22:17:45.071241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.225 [2024-12-14 22:17:45.087104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:08:24.484 Running I/O for 1 seconds... 00:08:24.484 Running I/O for 1 seconds... 00:08:24.484 Running I/O for 1 seconds... 00:08:24.484 Running I/O for 1 seconds... 
00:08:25.421 12596.00 IOPS, 49.20 MiB/s 00:08:25.421 Latency(us) 00:08:25.421 [2024-12-14T21:17:46.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.421 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:25.421 Nvme1n1 : 1.01 12649.10 49.41 0.00 0.00 10085.67 5274.09 16727.28 00:08:25.421 [2024-12-14T21:17:46.305Z] =================================================================================================================== 00:08:25.421 [2024-12-14T21:17:46.305Z] Total : 12649.10 49.41 0.00 0.00 10085.67 5274.09 16727.28 00:08:25.421 242336.00 IOPS, 946.62 MiB/s 00:08:25.421 Latency(us) 00:08:25.421 [2024-12-14T21:17:46.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.421 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:25.421 Nvme1n1 : 1.00 241971.61 945.20 0.00 0.00 526.73 222.35 1505.77 00:08:25.421 [2024-12-14T21:17:46.305Z] =================================================================================================================== 00:08:25.421 [2024-12-14T21:17:46.305Z] Total : 241971.61 945.20 0.00 0.00 526.73 222.35 1505.77 00:08:25.421 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 161287 00:08:25.680 9912.00 IOPS, 38.72 MiB/s 00:08:25.680 Latency(us) 00:08:25.680 [2024-12-14T21:17:46.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.680 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:25.680 Nvme1n1 : 1.01 9964.79 38.92 0.00 0.00 12794.07 4462.69 16103.13 00:08:25.680 [2024-12-14T21:17:46.564Z] =================================================================================================================== 00:08:25.680 [2024-12-14T21:17:46.564Z] Total : 9964.79 38.92 0.00 0.00 12794.07 4462.69 16103.13 00:08:25.680 11249.00 IOPS, 43.94 MiB/s 00:08:25.680 Latency(us) 00:08:25.680 
[2024-12-14T21:17:46.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.680 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:25.680 Nvme1n1 : 1.01 11336.00 44.28 0.00 0.00 11263.88 3370.42 23218.47 00:08:25.680 [2024-12-14T21:17:46.564Z] =================================================================================================================== 00:08:25.680 [2024-12-14T21:17:46.564Z] Total : 11336.00 44.28 0.00 0.00 11263.88 3370.42 23218.47 00:08:25.680 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 161289 00:08:25.680 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 161292 00:08:25.680 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:25.680 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.680 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:25.680 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.680 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:25.680 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:25.680 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:25.680 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:25.680 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:25.680 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:25.680 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:08:25.680 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:25.680 rmmod nvme_tcp 00:08:25.680 rmmod nvme_fabrics 00:08:25.680 rmmod nvme_keyring 00:08:25.680 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:25.680 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:25.680 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:25.680 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 161139 ']' 00:08:25.681 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 161139 00:08:25.681 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 161139 ']' 00:08:25.681 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 161139 00:08:25.681 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:25.681 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.681 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 161139 00:08:25.941 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.941 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.941 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 161139' 00:08:25.941 killing process with pid 161139 00:08:25.941 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 161139 00:08:25.941 22:17:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 161139 00:08:25.941 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:25.941 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:25.941 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:25.941 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:25.941 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:25.941 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:25.941 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:25.941 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:25.941 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:25.941 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.941 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.941 22:17:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.479 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:28.479 00:08:28.479 real 0m10.766s 00:08:28.479 user 0m16.084s 00:08:28.479 sys 0m6.184s 00:08:28.479 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.479 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.479 ************************************ 
00:08:28.479 END TEST nvmf_bdev_io_wait 00:08:28.479 ************************************ 00:08:28.479 22:17:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:28.479 22:17:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:28.479 22:17:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.479 22:17:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:28.479 ************************************ 00:08:28.479 START TEST nvmf_queue_depth 00:08:28.479 ************************************ 00:08:28.479 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:28.479 * Looking for test storage... 00:08:28.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.479 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:28.479 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:28.479 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.479 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:28.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.479 --rc genhtml_branch_coverage=1 00:08:28.479 --rc genhtml_function_coverage=1 00:08:28.479 --rc genhtml_legend=1 00:08:28.479 --rc geninfo_all_blocks=1 00:08:28.479 --rc 
geninfo_unexecuted_blocks=1 00:08:28.480 00:08:28.480 ' 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:28.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.480 --rc genhtml_branch_coverage=1 00:08:28.480 --rc genhtml_function_coverage=1 00:08:28.480 --rc genhtml_legend=1 00:08:28.480 --rc geninfo_all_blocks=1 00:08:28.480 --rc geninfo_unexecuted_blocks=1 00:08:28.480 00:08:28.480 ' 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:28.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.480 --rc genhtml_branch_coverage=1 00:08:28.480 --rc genhtml_function_coverage=1 00:08:28.480 --rc genhtml_legend=1 00:08:28.480 --rc geninfo_all_blocks=1 00:08:28.480 --rc geninfo_unexecuted_blocks=1 00:08:28.480 00:08:28.480 ' 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:28.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.480 --rc genhtml_branch_coverage=1 00:08:28.480 --rc genhtml_function_coverage=1 00:08:28.480 --rc genhtml_legend=1 00:08:28.480 --rc geninfo_all_blocks=1 00:08:28.480 --rc geninfo_unexecuted_blocks=1 00:08:28.480 00:08:28.480 ' 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.480 22:17:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.480 22:17:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.480 22:17:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:28.480 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:35.059 22:17:54 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:35.059 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:35.059 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:35.059 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:35.060 Found net devices under 0000:af:00.0: cvl_0_0 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:35.060 Found net devices under 0000:af:00.1: cvl_0_1 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:35.060 
22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:35.060 22:17:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:35.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:08:35.060 00:08:35.060 --- 10.0.0.2 ping statistics --- 00:08:35.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.060 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:35.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:35.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:08:35.060 00:08:35.060 --- 10.0.0.1 ping statistics --- 00:08:35.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.060 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=165052 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 165052 
00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 165052 ']' 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.060 [2024-12-14 22:17:55.226563] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:35.060 [2024-12-14 22:17:55.226608] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.060 [2024-12-14 22:17:55.304300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.060 [2024-12-14 22:17:55.325141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.060 [2024-12-14 22:17:55.325179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:35.060 [2024-12-14 22:17:55.325186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.060 [2024-12-14 22:17:55.325191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.060 [2024-12-14 22:17:55.325197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.060 [2024-12-14 22:17:55.325691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.060 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.061 [2024-12-14 22:17:55.467245] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.061 Malloc0 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.061 [2024-12-14 22:17:55.517295] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.061 22:17:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=165242 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 165242 /var/tmp/bdevperf.sock 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 165242 ']' 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:35.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.061 [2024-12-14 22:17:55.568145] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:35.061 [2024-12-14 22:17:55.568186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165242 ] 00:08:35.061 [2024-12-14 22:17:55.642823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.061 [2024-12-14 22:17:55.665899] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.061 NVMe0n1 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.061 22:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:35.321 Running I/O for 10 seconds... 
00:08:37.197 12217.00 IOPS, 47.72 MiB/s [2024-12-14T21:17:59.020Z] 12280.00 IOPS, 47.97 MiB/s [2024-12-14T21:18:00.401Z] 12463.00 IOPS, 48.68 MiB/s [2024-12-14T21:18:01.341Z] 12530.25 IOPS, 48.95 MiB/s [2024-12-14T21:18:02.279Z] 12555.20 IOPS, 49.04 MiB/s [2024-12-14T21:18:03.218Z] 12600.33 IOPS, 49.22 MiB/s [2024-12-14T21:18:04.157Z] 12615.71 IOPS, 49.28 MiB/s [2024-12-14T21:18:05.096Z] 12637.62 IOPS, 49.37 MiB/s [2024-12-14T21:18:06.034Z] 12610.78 IOPS, 49.26 MiB/s [2024-12-14T21:18:06.294Z] 12665.10 IOPS, 49.47 MiB/s 00:08:45.410 Latency(us) 00:08:45.410 [2024-12-14T21:18:06.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.410 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:45.410 Verification LBA range: start 0x0 length 0x4000 00:08:45.410 NVMe0n1 : 10.06 12688.33 49.56 0.00 0.00 80456.75 18599.74 52179.14 00:08:45.410 [2024-12-14T21:18:06.294Z] =================================================================================================================== 00:08:45.410 [2024-12-14T21:18:06.294Z] Total : 12688.33 49.56 0.00 0.00 80456.75 18599.74 52179.14 00:08:45.410 { 00:08:45.410 "results": [ 00:08:45.410 { 00:08:45.410 "job": "NVMe0n1", 00:08:45.410 "core_mask": "0x1", 00:08:45.410 "workload": "verify", 00:08:45.410 "status": "finished", 00:08:45.410 "verify_range": { 00:08:45.410 "start": 0, 00:08:45.410 "length": 16384 00:08:45.410 }, 00:08:45.410 "queue_depth": 1024, 00:08:45.410 "io_size": 4096, 00:08:45.410 "runtime": 10.062399, 00:08:45.410 "iops": 12688.326113881987, 00:08:45.410 "mibps": 49.56377388235151, 00:08:45.410 "io_failed": 0, 00:08:45.410 "io_timeout": 0, 00:08:45.410 "avg_latency_us": 80456.75289939671, 00:08:45.410 "min_latency_us": 18599.74095238095, 00:08:45.410 "max_latency_us": 52179.13904761905 00:08:45.410 } 00:08:45.410 ], 00:08:45.410 "core_count": 1 00:08:45.410 } 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 165242 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 165242 ']' 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 165242 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165242 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165242' 00:08:45.410 killing process with pid 165242 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 165242 00:08:45.410 Received shutdown signal, test time was about 10.000000 seconds 00:08:45.410 00:08:45.410 Latency(us) 00:08:45.410 [2024-12-14T21:18:06.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.410 [2024-12-14T21:18:06.294Z] =================================================================================================================== 00:08:45.410 [2024-12-14T21:18:06.294Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 165242 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:45.410 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:45.410 rmmod nvme_tcp 00:08:45.670 rmmod nvme_fabrics 00:08:45.670 rmmod nvme_keyring 00:08:45.670 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:45.670 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:45.670 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:45.670 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 165052 ']' 00:08:45.670 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 165052 00:08:45.670 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 165052 ']' 00:08:45.670 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 165052 00:08:45.670 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:45.670 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.670 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165052 00:08:45.670 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:45.670 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:45.670 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165052' 00:08:45.670 killing process with pid 165052 00:08:45.670 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 165052 00:08:45.670 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 165052 00:08:45.930 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:45.930 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:45.930 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:45.930 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:45.930 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:45.930 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:45.930 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:45.930 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:45.930 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:45.930 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.930 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.930 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.837 22:18:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:47.837 00:08:47.837 real 0m19.769s 00:08:47.837 user 0m22.912s 00:08:47.837 sys 0m6.116s 00:08:47.837 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.837 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.837 ************************************ 00:08:47.837 END TEST nvmf_queue_depth 00:08:47.837 ************************************ 00:08:47.837 22:18:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:47.837 22:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.837 22:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.837 22:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:48.098 ************************************ 00:08:48.098 START TEST nvmf_target_multipath 00:08:48.098 ************************************ 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:48.098 * Looking for test storage... 
00:08:48.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:48.098 22:18:08 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:48.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.098 --rc genhtml_branch_coverage=1 00:08:48.098 --rc genhtml_function_coverage=1 00:08:48.098 --rc genhtml_legend=1 00:08:48.098 --rc geninfo_all_blocks=1 00:08:48.098 --rc geninfo_unexecuted_blocks=1 00:08:48.098 00:08:48.098 ' 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:48.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.098 --rc genhtml_branch_coverage=1 00:08:48.098 --rc genhtml_function_coverage=1 00:08:48.098 --rc genhtml_legend=1 00:08:48.098 --rc geninfo_all_blocks=1 00:08:48.098 --rc geninfo_unexecuted_blocks=1 00:08:48.098 00:08:48.098 ' 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:48.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.098 --rc genhtml_branch_coverage=1 00:08:48.098 --rc genhtml_function_coverage=1 00:08:48.098 --rc genhtml_legend=1 00:08:48.098 --rc geninfo_all_blocks=1 00:08:48.098 --rc geninfo_unexecuted_blocks=1 00:08:48.098 00:08:48.098 ' 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:48.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.098 --rc genhtml_branch_coverage=1 00:08:48.098 --rc genhtml_function_coverage=1 00:08:48.098 --rc genhtml_legend=1 00:08:48.098 --rc geninfo_all_blocks=1 00:08:48.098 --rc geninfo_unexecuted_blocks=1 00:08:48.098 00:08:48.098 ' 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.098 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:48.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:48.099 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:54.678 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:54.679 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:54.679 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:54.679 Found net devices under 0000:af:00.0: cvl_0_0 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:54.679 22:18:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:54.679 Found net devices under 0000:af:00.1: cvl_0_1 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:54.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:08:54.679 00:08:54.679 --- 10.0.0.2 ping statistics --- 00:08:54.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.679 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:54.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:08:54.679 00:08:54.679 --- 10.0.0.1 ping statistics --- 00:08:54.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.679 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:54.679 only one NIC for nvmf test 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:54.679 22:18:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:54.679 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:54.679 rmmod nvme_tcp 00:08:54.679 rmmod nvme_fabrics 00:08:54.679 rmmod nvme_keyring 00:08:54.680 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.680 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:54.680 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:54.680 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:54.680 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:54.680 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:54.680 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:54.680 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:54.680 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:54.680 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:54.680 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:54.680 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:54.680 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:54.680 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.680 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.680 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:56.589 00:08:56.589 real 0m8.429s 00:08:56.589 user 0m1.811s 00:08:56.589 sys 0m4.567s 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:56.589 ************************************ 00:08:56.589 END TEST nvmf_target_multipath 00:08:56.589 ************************************ 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.589 22:18:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:56.589 ************************************ 00:08:56.589 START TEST nvmf_zcopy 00:08:56.589 ************************************ 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:56.590 * Looking for test storage... 00:08:56.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.590 22:18:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:56.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.590 --rc genhtml_branch_coverage=1 00:08:56.590 --rc genhtml_function_coverage=1 00:08:56.590 --rc genhtml_legend=1 00:08:56.590 --rc geninfo_all_blocks=1 00:08:56.590 --rc geninfo_unexecuted_blocks=1 00:08:56.590 00:08:56.590 ' 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:56.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.590 --rc genhtml_branch_coverage=1 00:08:56.590 --rc genhtml_function_coverage=1 00:08:56.590 --rc genhtml_legend=1 00:08:56.590 --rc geninfo_all_blocks=1 00:08:56.590 --rc geninfo_unexecuted_blocks=1 00:08:56.590 00:08:56.590 ' 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:56.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.590 --rc genhtml_branch_coverage=1 00:08:56.590 --rc genhtml_function_coverage=1 00:08:56.590 --rc genhtml_legend=1 00:08:56.590 --rc geninfo_all_blocks=1 00:08:56.590 --rc geninfo_unexecuted_blocks=1 00:08:56.590 00:08:56.590 ' 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:56.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.590 --rc genhtml_branch_coverage=1 00:08:56.590 --rc 
genhtml_function_coverage=1 00:08:56.590 --rc genhtml_legend=1 00:08:56.590 --rc geninfo_all_blocks=1 00:08:56.590 --rc geninfo_unexecuted_blocks=1 00:08:56.590 00:08:56.590 ' 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.590 22:18:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:56.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:56.590 22:18:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:56.590 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:56.591 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.591 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.591 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.851 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:56.851 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:56.851 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:56.851 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.428 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:03.428 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:03.429 22:18:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:03.429 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:03.429 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:03.429 Found net devices under 0000:af:00.0: cvl_0_0 00:09:03.429 22:18:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:03.429 Found net devices under 0000:af:00.1: cvl_0_1 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.429 22:18:23 
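The "Found net devices under ..." records above come from nvmf/common.sh globbing each NIC's sysfs entry to map a PCI function to its netdev name. A minimal re-creation of that discovery step is sketched below; it builds a mock sysfs tree (so it runs without real hardware) with the same PCI addresses and cvl_* interface names seen in the log.

```shell
# Sketch of the per-PCI netdev discovery (nvmf/common.sh@411/@427/@429).
# A mock sysfs tree stands in for /sys/bus/pci/devices; paths and names
# mirror the log, not live hardware.
sysfs=$(mktemp -d)
for pci in 0000:af:00.0 0000:af:00.1; do
  # e.g. 0000:af:00.0 -> cvl_0_0, 0000:af:00.1 -> cvl_0_1
  mkdir -p "$sysfs/$pci/net/cvl_0_${pci##*.}"
done

net_devs=()
for pci in 0000:af:00.0 0000:af:00.1; do
  pci_net_devs=("$sysfs/$pci/net/"*)            # glob the device's netdevs
  pci_net_devs=("${pci_net_devs[@]##*/}")       # strip path, keep ifname
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

With two interfaces collected, the harness then picks cvl_0_0 as the target interface and cvl_0_1 as the initiator, which is exactly the split visible in the namespace setup that follows.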
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:03.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:09:03.429 00:09:03.429 --- 10.0.0.2 ping statistics --- 00:09:03.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.429 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:03.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:03.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:09:03.429 00:09:03.429 --- 10.0.0.1 ping statistics --- 00:09:03.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.429 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:03.429 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=174498 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 174498 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 174498 ']' 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.430 [2024-12-14 22:18:23.501352] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:03.430 [2024-12-14 22:18:23.501396] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.430 [2024-12-14 22:18:23.577411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.430 [2024-12-14 22:18:23.598062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.430 [2024-12-14 22:18:23.598098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:03.430 [2024-12-14 22:18:23.598105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.430 [2024-12-14 22:18:23.598111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.430 [2024-12-14 22:18:23.598117] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.430 [2024-12-14 22:18:23.598588] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.430 [2024-12-14 22:18:23.740421] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.430 [2024-12-14 22:18:23.760641] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.430 malloc0 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:03.430 { 00:09:03.430 "params": { 00:09:03.430 "name": "Nvme$subsystem", 00:09:03.430 "trtype": "$TEST_TRANSPORT", 00:09:03.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.430 "adrfam": "ipv4", 00:09:03.430 "trsvcid": "$NVMF_PORT", 00:09:03.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.430 "hdgst": ${hdgst:-false}, 00:09:03.430 "ddgst": ${ddgst:-false} 00:09:03.430 }, 00:09:03.430 "method": "bdev_nvme_attach_controller" 00:09:03.430 } 00:09:03.430 EOF 00:09:03.430 )") 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:03.430 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:03.430 "params": { 00:09:03.430 "name": "Nvme1", 00:09:03.430 "trtype": "tcp", 00:09:03.430 "traddr": "10.0.0.2", 00:09:03.430 "adrfam": "ipv4", 00:09:03.430 "trsvcid": "4420", 00:09:03.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.430 "hdgst": false, 00:09:03.430 "ddgst": false 00:09:03.430 }, 00:09:03.430 "method": "bdev_nvme_attach_controller" 00:09:03.430 }' 00:09:03.430 [2024-12-14 22:18:23.840806] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:03.430 [2024-12-14 22:18:23.840859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174522 ] 00:09:03.430 [2024-12-14 22:18:23.914198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.430 [2024-12-14 22:18:23.936525] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.430 Running I/O for 10 seconds... 
00:09:05.315 8799.00 IOPS, 68.74 MiB/s [2024-12-14T21:18:27.577Z] 8892.50 IOPS, 69.47 MiB/s [2024-12-14T21:18:28.514Z] 8906.00 IOPS, 69.58 MiB/s [2024-12-14T21:18:29.451Z] 8911.00 IOPS, 69.62 MiB/s [2024-12-14T21:18:30.387Z] 8929.60 IOPS, 69.76 MiB/s [2024-12-14T21:18:31.323Z] 8926.33 IOPS, 69.74 MiB/s [2024-12-14T21:18:32.259Z] 8903.71 IOPS, 69.56 MiB/s [2024-12-14T21:18:33.195Z] 8906.62 IOPS, 69.58 MiB/s [2024-12-14T21:18:34.573Z] 8909.67 IOPS, 69.61 MiB/s [2024-12-14T21:18:34.573Z] 8911.10 IOPS, 69.62 MiB/s 00:09:13.689 Latency(us) 00:09:13.689 [2024-12-14T21:18:34.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.689 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:13.689 Verification LBA range: start 0x0 length 0x1000 00:09:13.689 Nvme1n1 : 10.01 8912.41 69.63 0.00 0.00 14319.36 1575.98 23218.47 00:09:13.689 [2024-12-14T21:18:34.573Z] =================================================================================================================== 00:09:13.689 [2024-12-14T21:18:34.573Z] Total : 8912.41 69.63 0.00 0.00 14319.36 1575.98 23218.47 00:09:13.689 22:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=176303 00:09:13.689 22:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:13.689 22:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.689 22:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:13.689 22:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:13.689 22:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:13.689 22:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:13.689 22:18:34 
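The bdevperf summary above pairs 8912.41 IOPS with 69.63 MiB/s; since the run used `-o 8192` (8 KiB I/Os), the two columns are related by a fixed unit conversion, which can be checked directly:

```shell
# Sanity-check the Nvme1n1 row: MiB/s = IOPS * 8192 bytes / 1048576.
iops=8912.41
mibs=$(awk -v i="$iops" 'BEGIN { printf "%.2f", i * 8192 / 1048576 }')
echo "$mibs MiB/s"   # matches the 69.63 MiB/s reported in the table
```

The same conversion holds for the per-second progress samples (e.g. 8799.00 IOPS = 68.74 MiB/s).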
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:13.689 22:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:13.689 { 00:09:13.689 "params": { 00:09:13.689 "name": "Nvme$subsystem", 00:09:13.689 "trtype": "$TEST_TRANSPORT", 00:09:13.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.689 "adrfam": "ipv4", 00:09:13.689 "trsvcid": "$NVMF_PORT", 00:09:13.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:13.689 "hdgst": ${hdgst:-false}, 00:09:13.689 "ddgst": ${ddgst:-false} 00:09:13.689 }, 00:09:13.689 "method": "bdev_nvme_attach_controller" 00:09:13.689 } 00:09:13.689 EOF 00:09:13.689 )") 00:09:13.689 22:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:13.689 [2024-12-14 22:18:34.368455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.689 [2024-12-14 22:18:34.368487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.689 22:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
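The repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs that follow are expected: the test keeps re-issuing `nvmf_subsystem_add_ns ... -n 1` against cnode1 while namespace 1 (malloc0) is attached, exercising the RPC path under I/O load. A toy local model of that NSID check, not the SPDK RPC itself, behaves the same way:

```shell
# Toy simulation of the duplicate-NSID rejection seen in the log.
# nsids stands in for the subsystem's namespace table; this is a local
# illustration only, not subsystem.c behavior reproduced verbatim.
declare -A nsids
add_ns() {                      # add_ns <nsid>
  if [[ -n "${nsids[$1]:-}" ]]; then
    echo "Requested NSID $1 already in use" >&2
    return 1                    # mirrors the RPC error path
  fi
  nsids[$1]=1
}
add_ns 1                        # first add succeeds
add_ns 1 || echo "rpc failed as expected"
```

In the log every retry fails within milliseconds, confirming the target keeps servicing RPCs while bdevperf drives the randrw workload.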
00:09:13.689 22:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:13.689 22:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:13.689 "params": { 00:09:13.689 "name": "Nvme1", 00:09:13.689 "trtype": "tcp", 00:09:13.689 "traddr": "10.0.0.2", 00:09:13.689 "adrfam": "ipv4", 00:09:13.689 "trsvcid": "4420", 00:09:13.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:13.689 "hdgst": false, 00:09:13.689 "ddgst": false 00:09:13.689 }, 00:09:13.689 "method": "bdev_nvme_attach_controller" 00:09:13.689 }' 00:09:13.689 [2024-12-14 22:18:34.380443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.689 [2024-12-14 22:18:34.380455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.689 [2024-12-14 22:18:34.392471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.689 [2024-12-14 22:18:34.392480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.689 [2024-12-14 22:18:34.404038] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:13.689 [2024-12-14 22:18:34.404077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176303 ] 00:09:13.689 [2024-12-14 22:18:34.404505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.689 [2024-12-14 22:18:34.404515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.689 [2024-12-14 22:18:34.416535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.689 [2024-12-14 22:18:34.416545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.689 [2024-12-14 22:18:34.428566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.689 [2024-12-14 22:18:34.428575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.689 [2024-12-14 22:18:34.440597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.690 [2024-12-14 22:18:34.440606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.690 [2024-12-14 22:18:34.452629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.690 [2024-12-14 22:18:34.452640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.690 [2024-12-14 22:18:34.464661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.690 [2024-12-14 22:18:34.464670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.690 [2024-12-14 22:18:34.476573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.690 [2024-12-14 22:18:34.476691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:13.690 [2024-12-14 22:18:34.476700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.690 [2024-12-14 22:18:34.488728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.690 [2024-12-14 22:18:34.488741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.690 [2024-12-14 22:18:34.499037] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.690 [2024-12-14 22:18:34.500757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.690 [2024-12-14 22:18:34.500770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.690 [2024-12-14 22:18:34.512797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.690 [2024-12-14 22:18:34.512813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.690 [2024-12-14 22:18:34.524825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.690 [2024-12-14 22:18:34.524843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.690 [2024-12-14 22:18:34.536868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.690 [2024-12-14 22:18:34.536889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.690 [2024-12-14 22:18:34.548885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.690 [2024-12-14 22:18:34.548897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.690 [2024-12-14 22:18:34.560926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.690 [2024-12-14 22:18:34.560939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.690 [2024-12-14 22:18:34.572966] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.690 [2024-12-14 22:18:34.572984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats with advancing timestamps through 2024-12-14 22:18:36.827 ...]
Running I/O for 5 seconds... 00:09:13.949
16993.00 IOPS, 132.76 MiB/s [2024-12-14T21:18:35.868Z]
17076.50 IOPS, 133.41 MiB/s [2024-12-14T21:18:36.917Z]
[2024-12-14 22:18:36.827335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.033 [2024-12-14 22:18:36.841024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.033 [2024-12-14 22:18:36.841042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.033 [2024-12-14 22:18:36.855315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.033 [2024-12-14 22:18:36.855333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.033 [2024-12-14 22:18:36.869058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.033 [2024-12-14 22:18:36.869076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.033 [2024-12-14 22:18:36.882901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.033 [2024-12-14 22:18:36.882925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.033 [2024-12-14 22:18:36.896369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.033 [2024-12-14 22:18:36.896392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.033 [2024-12-14 22:18:36.910212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.033 [2024-12-14 22:18:36.910230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.292 [2024-12-14 22:18:36.924389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:36.924407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.292 [2024-12-14 22:18:36.938052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:36.938071] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.292 [2024-12-14 22:18:36.951511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:36.951530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.292 [2024-12-14 22:18:36.965221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:36.965239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.292 [2024-12-14 22:18:36.978937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:36.978955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.292 [2024-12-14 22:18:36.992642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:36.992660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.292 [2024-12-14 22:18:37.006729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:37.006748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.292 [2024-12-14 22:18:37.020357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:37.020375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.292 [2024-12-14 22:18:37.033970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:37.033988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.292 [2024-12-14 22:18:37.047805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:37.047823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:16.292 [2024-12-14 22:18:37.061461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:37.061479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.292 [2024-12-14 22:18:37.074718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:37.074736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.292 [2024-12-14 22:18:37.088026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:37.088043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.292 [2024-12-14 22:18:37.101666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:37.101684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.292 [2024-12-14 22:18:37.115252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:37.115271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.292 [2024-12-14 22:18:37.129184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:37.129202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.292 [2024-12-14 22:18:37.142561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:37.142579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.292 [2024-12-14 22:18:37.156376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:37.156398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.292 [2024-12-14 22:18:37.169769] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.292 [2024-12-14 22:18:37.169787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.183679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.552 [2024-12-14 22:18:37.183697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.197201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.552 [2024-12-14 22:18:37.197220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.210936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.552 [2024-12-14 22:18:37.210955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.224568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.552 [2024-12-14 22:18:37.224586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.238505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.552 [2024-12-14 22:18:37.238523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.252319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.552 [2024-12-14 22:18:37.252337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.265790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.552 [2024-12-14 22:18:37.265808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.279417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:16.552 [2024-12-14 22:18:37.279435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.293076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.552 [2024-12-14 22:18:37.293094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.306691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.552 [2024-12-14 22:18:37.306709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.320317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.552 [2024-12-14 22:18:37.320336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.333700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.552 [2024-12-14 22:18:37.333718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.347473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.552 [2024-12-14 22:18:37.347492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.361376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.552 [2024-12-14 22:18:37.361394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.374827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.552 [2024-12-14 22:18:37.374845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.388389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.552 
[2024-12-14 22:18:37.388407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.401483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.552 [2024-12-14 22:18:37.401501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.415577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.552 [2024-12-14 22:18:37.415600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.552 [2024-12-14 22:18:37.429527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.552 [2024-12-14 22:18:37.429545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.811 [2024-12-14 22:18:37.442974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.811 [2024-12-14 22:18:37.442994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.811 [2024-12-14 22:18:37.456335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.811 [2024-12-14 22:18:37.456354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.811 [2024-12-14 22:18:37.469670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.811 [2024-12-14 22:18:37.469690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.811 [2024-12-14 22:18:37.483516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.811 [2024-12-14 22:18:37.483534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.811 [2024-12-14 22:18:37.497262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.811 [2024-12-14 22:18:37.497281] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.811 [2024-12-14 22:18:37.510997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.811 [2024-12-14 22:18:37.511016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.811 [2024-12-14 22:18:37.524955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.811 [2024-12-14 22:18:37.524977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.811 [2024-12-14 22:18:37.538432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.811 [2024-12-14 22:18:37.538450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.811 [2024-12-14 22:18:37.552264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.811 [2024-12-14 22:18:37.552282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.811 [2024-12-14 22:18:37.565960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.811 [2024-12-14 22:18:37.565977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.811 [2024-12-14 22:18:37.579390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.811 [2024-12-14 22:18:37.579408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.811 [2024-12-14 22:18:37.593275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.811 [2024-12-14 22:18:37.593294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.811 [2024-12-14 22:18:37.606442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.811 [2024-12-14 22:18:37.606460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:16.811 [2024-12-14 22:18:37.620475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.811 [2024-12-14 22:18:37.620494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.811 [2024-12-14 22:18:37.633851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.811 [2024-12-14 22:18:37.633869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.811 [2024-12-14 22:18:37.647471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.811 [2024-12-14 22:18:37.647489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.811 [2024-12-14 22:18:37.660805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.811 [2024-12-14 22:18:37.660822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.811 [2024-12-14 22:18:37.674222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.811 [2024-12-14 22:18:37.674242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.812 [2024-12-14 22:18:37.688061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.812 [2024-12-14 22:18:37.688080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 [2024-12-14 22:18:37.701485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.701505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 [2024-12-14 22:18:37.715314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.715335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 [2024-12-14 22:18:37.729341] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.729360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 [2024-12-14 22:18:37.742879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.742898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 17101.33 IOPS, 133.60 MiB/s [2024-12-14T21:18:37.955Z] [2024-12-14 22:18:37.756743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.756762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 [2024-12-14 22:18:37.770190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.770209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 [2024-12-14 22:18:37.784061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.784084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 [2024-12-14 22:18:37.797386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.797405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 [2024-12-14 22:18:37.810834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.810853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 [2024-12-14 22:18:37.824226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.824244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 [2024-12-14 22:18:37.838171] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.838190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 [2024-12-14 22:18:37.851333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.851351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 [2024-12-14 22:18:37.864996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.865015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 [2024-12-14 22:18:37.878509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.878528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 [2024-12-14 22:18:37.891996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.892015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 [2024-12-14 22:18:37.905587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.905606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 [2024-12-14 22:18:37.919253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.919272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 [2024-12-14 22:18:37.932896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.932921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.071 [2024-12-14 22:18:37.945973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:17.071 [2024-12-14 22:18:37.945992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:37.960093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 [2024-12-14 22:18:37.960112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:37.973900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 [2024-12-14 22:18:37.973926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:37.987577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 [2024-12-14 22:18:37.987596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:38.001378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 [2024-12-14 22:18:38.001397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:38.014964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 [2024-12-14 22:18:38.014982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:38.028549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 [2024-12-14 22:18:38.028567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:38.042270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 [2024-12-14 22:18:38.042288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:38.055492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 
[2024-12-14 22:18:38.055510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:38.069244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 [2024-12-14 22:18:38.069262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:38.082656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 [2024-12-14 22:18:38.082673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:38.096540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 [2024-12-14 22:18:38.096558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:38.110279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 [2024-12-14 22:18:38.110297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:38.124173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 [2024-12-14 22:18:38.124191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:38.137065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 [2024-12-14 22:18:38.137092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:38.150784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 [2024-12-14 22:18:38.150802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:38.164122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 [2024-12-14 22:18:38.164140] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:38.177341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 [2024-12-14 22:18:38.177362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:38.190696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 [2024-12-14 22:18:38.190714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.331 [2024-12-14 22:18:38.204412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.331 [2024-12-14 22:18:38.204430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.590 [2024-12-14 22:18:38.218163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.590 [2024-12-14 22:18:38.218183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.590 [2024-12-14 22:18:38.231746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.590 [2024-12-14 22:18:38.231765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.590 [2024-12-14 22:18:38.245073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.590 [2024-12-14 22:18:38.245091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.590 [2024-12-14 22:18:38.258947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.590 [2024-12-14 22:18:38.258966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.590 [2024-12-14 22:18:38.272624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.590 [2024-12-14 22:18:38.272643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:17.590 [2024-12-14 22:18:38.286303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.590 [2024-12-14 22:18:38.286321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.590 [2024-12-14 22:18:38.300180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.590 [2024-12-14 22:18:38.300198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.590 [2024-12-14 22:18:38.313839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.590 [2024-12-14 22:18:38.313857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.590 [2024-12-14 22:18:38.327324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.590 [2024-12-14 22:18:38.327342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.590 [2024-12-14 22:18:38.340754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.590 [2024-12-14 22:18:38.340772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.590 [2024-12-14 22:18:38.354641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.590 [2024-12-14 22:18:38.354659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.590 [2024-12-14 22:18:38.367940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.590 [2024-12-14 22:18:38.367957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.590 [2024-12-14 22:18:38.381491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.590 [2024-12-14 22:18:38.381509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.590 [2024-12-14 22:18:38.395134] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.590 [2024-12-14 22:18:38.395152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.590
17129.50 IOPS, 133.82 MiB/s [2024-12-14T21:18:38.993Z]
17138.20 IOPS, 133.89 MiB/s [2024-12-14T21:18:39.772Z]
00:09:18.888 Latency(us)
00:09:18.888 [2024-12-14T21:18:39.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:18.888 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:18.888 Nvme1n1 : 5.01 17142.25 133.92 0.00 0.00 7459.13 2949.12 17101.78
00:09:18.888 [2024-12-14T21:18:39.772Z] ===================================================================================================================
00:09:18.888 [2024-12-14T21:18:39.772Z] Total : 17142.25 133.92 0.00 0.00 7459.13 2949.12 17101.78
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (176303) - No such process 00:09:19.147 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 176303 00:09:19.147 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.147 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.147 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.147 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.147 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:19.147 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.147 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.147 delay0 00:09:19.147 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.147 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:19.147 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.147 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy --
common/autotest_common.sh@10 -- # set +x 00:09:19.147 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.147 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:19.406 [2024-12-14 22:18:40.054686] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:25.972 [2024-12-14 22:18:46.229551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04bc0 is same with the state(6) to be set 00:09:25.972 Initializing NVMe Controllers 00:09:25.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:25.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:25.972 Initialization complete. Launching workers. 
00:09:25.972 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 116 00:09:25.972 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 403, failed to submit 33 00:09:25.972 success 230, unsuccessful 173, failed 0 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:25.972 rmmod nvme_tcp 00:09:25.972 rmmod nvme_fabrics 00:09:25.972 rmmod nvme_keyring 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 174498 ']' 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 174498 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 174498 ']' 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 174498 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 174498 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 174498' 00:09:25.972 killing process with pid 174498 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 174498 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 174498 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.972 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.880 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:27.880 00:09:27.880 real 0m31.352s 00:09:27.880 user 0m43.083s 00:09:27.880 sys 0m9.905s 00:09:27.880 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.880 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.880 ************************************ 00:09:27.880 END TEST nvmf_zcopy 00:09:27.880 ************************************ 00:09:27.880 22:18:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:27.880 22:18:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:27.880 22:18:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.880 22:18:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.880 ************************************ 00:09:27.880 START TEST nvmf_nmic 00:09:27.880 ************************************ 00:09:27.880 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:27.880 * Looking for test storage... 
00:09:28.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.140 22:18:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:28.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.140 --rc genhtml_branch_coverage=1 00:09:28.140 --rc genhtml_function_coverage=1 00:09:28.140 --rc genhtml_legend=1 00:09:28.140 --rc geninfo_all_blocks=1 00:09:28.140 --rc geninfo_unexecuted_blocks=1 
00:09:28.140 00:09:28.140 ' 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:28.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.140 --rc genhtml_branch_coverage=1 00:09:28.140 --rc genhtml_function_coverage=1 00:09:28.140 --rc genhtml_legend=1 00:09:28.140 --rc geninfo_all_blocks=1 00:09:28.140 --rc geninfo_unexecuted_blocks=1 00:09:28.140 00:09:28.140 ' 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:28.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.140 --rc genhtml_branch_coverage=1 00:09:28.140 --rc genhtml_function_coverage=1 00:09:28.140 --rc genhtml_legend=1 00:09:28.140 --rc geninfo_all_blocks=1 00:09:28.140 --rc geninfo_unexecuted_blocks=1 00:09:28.140 00:09:28.140 ' 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:28.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.140 --rc genhtml_branch_coverage=1 00:09:28.140 --rc genhtml_function_coverage=1 00:09:28.140 --rc genhtml_legend=1 00:09:28.140 --rc geninfo_all_blocks=1 00:09:28.140 --rc geninfo_unexecuted_blocks=1 00:09:28.140 00:09:28.140 ' 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.140 22:18:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.140 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:28.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:28.141 
22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:28.141 22:18:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.717 22:18:54 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:34.717 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:34.717 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:34.718 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:34.718 Found net devices under 0000:af:00.0: cvl_0_0 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:34.718 Found net devices under 0000:af:00.1: cvl_0_1 00:09:34.718 
22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:34.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:09:34.718 00:09:34.718 --- 10.0.0.2 ping statistics --- 00:09:34.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.718 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:09:34.718 00:09:34.718 --- 10.0.0.1 ping statistics --- 00:09:34.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.718 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=181782 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 181782 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 181782 ']' 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.718 22:18:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.718 [2024-12-14 22:18:54.870379] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:34.718 [2024-12-14 22:18:54.870429] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.718 [2024-12-14 22:18:54.948429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.718 [2024-12-14 22:18:54.973713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.718 [2024-12-14 22:18:54.973752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:34.718 [2024-12-14 22:18:54.973760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.718 [2024-12-14 22:18:54.973765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.718 [2024-12-14 22:18:54.973771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.718 [2024-12-14 22:18:54.975096] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.718 [2024-12-14 22:18:54.975134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.718 [2024-12-14 22:18:54.975239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.718 [2024-12-14 22:18:54.975240] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.718 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.718 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:34.718 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:34.718 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:34.718 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.718 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.718 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.719 [2024-12-14 22:18:55.107818] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.719 
22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.719 Malloc0 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.719 [2024-12-14 22:18:55.166436] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:34.719 test case1: single bdev can't be used in multiple subsystems 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.719 [2024-12-14 22:18:55.194367] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:34.719 [2024-12-14 
22:18:55.194386] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:34.719 [2024-12-14 22:18:55.194394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.719 request: 00:09:34.719 { 00:09:34.719 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:34.719 "namespace": { 00:09:34.719 "bdev_name": "Malloc0", 00:09:34.719 "no_auto_visible": false, 00:09:34.719 "hide_metadata": false 00:09:34.719 }, 00:09:34.719 "method": "nvmf_subsystem_add_ns", 00:09:34.719 "req_id": 1 00:09:34.719 } 00:09:34.719 Got JSON-RPC error response 00:09:34.719 response: 00:09:34.719 { 00:09:34.719 "code": -32602, 00:09:34.719 "message": "Invalid parameters" 00:09:34.719 } 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:34.719 Adding namespace failed - expected result. 
00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:34.719 test case2: host connect to nvmf target in multiple paths 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.719 [2024-12-14 22:18:55.206496] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.719 22:18:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:35.655 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:36.592 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:36.592 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:36.592 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:36.592 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:36.592 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:09:39.123 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:39.123 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:39.123 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:39.123 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:39.123 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:39.123 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:39.123 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:39.123 [global] 00:09:39.123 thread=1 00:09:39.123 invalidate=1 00:09:39.123 rw=write 00:09:39.123 time_based=1 00:09:39.123 runtime=1 00:09:39.123 ioengine=libaio 00:09:39.123 direct=1 00:09:39.123 bs=4096 00:09:39.123 iodepth=1 00:09:39.123 norandommap=0 00:09:39.123 numjobs=1 00:09:39.123 00:09:39.123 verify_dump=1 00:09:39.123 verify_backlog=512 00:09:39.123 verify_state_save=0 00:09:39.123 do_verify=1 00:09:39.123 verify=crc32c-intel 00:09:39.123 [job0] 00:09:39.123 filename=/dev/nvme0n1 00:09:39.123 Could not set queue depth (nvme0n1) 00:09:39.123 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.123 fio-3.35 00:09:39.123 Starting 1 thread 00:09:40.501 00:09:40.501 job0: (groupid=0, jobs=1): err= 0: pid=182817: Sat Dec 14 22:19:01 2024 00:09:40.501 read: IOPS=22, BW=90.5KiB/s (92.6kB/s)(92.0KiB/1017msec) 00:09:40.501 slat (nsec): min=9848, max=22599, avg=21358.13, stdev=2520.82 00:09:40.501 clat (usec): min=40891, max=41039, avg=40966.42, stdev=35.64 00:09:40.501 lat (usec): min=40913, max=41061, 
avg=40987.78, stdev=35.82 00:09:40.501 clat percentiles (usec): 00:09:40.501 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:40.501 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:40.501 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:40.501 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:40.502 | 99.99th=[41157] 00:09:40.502 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:09:40.502 slat (nsec): min=9226, max=45302, avg=10336.28, stdev=2566.23 00:09:40.502 clat (usec): min=105, max=430, avg=132.37, stdev=20.19 00:09:40.502 lat (usec): min=126, max=476, avg=142.70, stdev=21.96 00:09:40.502 clat percentiles (usec): 00:09:40.502 | 1.00th=[ 118], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 124], 00:09:40.502 | 30.00th=[ 126], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 131], 00:09:40.502 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 143], 95.00th=[ 155], 00:09:40.502 | 99.00th=[ 178], 99.50th=[ 221], 99.90th=[ 433], 99.95th=[ 433], 00:09:40.502 | 99.99th=[ 433] 00:09:40.502 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:40.502 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:40.502 lat (usec) : 250=95.33%, 500=0.37% 00:09:40.502 lat (msec) : 50=4.30% 00:09:40.502 cpu : usr=0.30%, sys=0.39%, ctx=535, majf=0, minf=1 00:09:40.502 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.502 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.502 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.502 00:09:40.502 Run status group 0 (all jobs): 00:09:40.502 READ: bw=90.5KiB/s (92.6kB/s), 90.5KiB/s-90.5KiB/s (92.6kB/s-92.6kB/s), io=92.0KiB (94.2kB), 
run=1017-1017msec 00:09:40.502 WRITE: bw=2014KiB/s (2062kB/s), 2014KiB/s-2014KiB/s (2062kB/s-2062kB/s), io=2048KiB (2097kB), run=1017-1017msec 00:09:40.502 00:09:40.502 Disk stats (read/write): 00:09:40.502 nvme0n1: ios=70/512, merge=0/0, ticks=842/66, in_queue=908, util=91.38% 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:40.502 rmmod nvme_tcp 00:09:40.502 rmmod nvme_fabrics 00:09:40.502 rmmod nvme_keyring 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 181782 ']' 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 181782 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 181782 ']' 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 181782 00:09:40.502 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 181782 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 181782' 00:09:40.761 killing process with pid 181782 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 181782 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 181782 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.761 22:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:43.296 00:09:43.296 real 0m15.010s 00:09:43.296 user 0m33.741s 00:09:43.296 sys 0m5.418s 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.296 ************************************ 00:09:43.296 END TEST nvmf_nmic 00:09:43.296 ************************************ 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.296 ************************************ 00:09:43.296 START TEST nvmf_fio_target 00:09:43.296 ************************************ 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:43.296 * Looking for test storage... 00:09:43.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read 
-ra ver2 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.296 22:19:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.296 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:43.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.297 --rc genhtml_branch_coverage=1 00:09:43.297 --rc genhtml_function_coverage=1 00:09:43.297 --rc genhtml_legend=1 00:09:43.297 --rc geninfo_all_blocks=1 00:09:43.297 --rc geninfo_unexecuted_blocks=1 00:09:43.297 00:09:43.297 ' 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:43.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.297 --rc genhtml_branch_coverage=1 00:09:43.297 --rc genhtml_function_coverage=1 00:09:43.297 --rc genhtml_legend=1 00:09:43.297 --rc geninfo_all_blocks=1 00:09:43.297 --rc geninfo_unexecuted_blocks=1 00:09:43.297 00:09:43.297 ' 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:43.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.297 --rc genhtml_branch_coverage=1 00:09:43.297 --rc genhtml_function_coverage=1 00:09:43.297 --rc genhtml_legend=1 00:09:43.297 --rc geninfo_all_blocks=1 00:09:43.297 --rc geninfo_unexecuted_blocks=1 00:09:43.297 00:09:43.297 ' 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:43.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.297 --rc 
genhtml_branch_coverage=1 00:09:43.297 --rc genhtml_function_coverage=1 00:09:43.297 --rc genhtml_legend=1 00:09:43.297 --rc geninfo_all_blocks=1 00:09:43.297 --rc geninfo_unexecuted_blocks=1 00:09:43.297 00:09:43.297 ' 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:43.297 22:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:49.869 22:19:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:49.869 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:49.869 22:19:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:49.869 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:49.869 Found net devices under 0000:af:00.0: cvl_0_0 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:49.869 Found net devices under 0000:af:00.1: cvl_0_1 
00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:49.869 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:49.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:49.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:09:49.869 00:09:49.869 --- 10.0.0.2 ping statistics --- 00:09:49.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.870 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:49.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:09:49.870 00:09:49.870 --- 10.0.0.1 ping statistics --- 00:09:49.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.870 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=186536 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 186536 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 186536 ']' 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.870 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.870 [2024-12-14 22:19:09.981614] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:49.870 [2024-12-14 22:19:09.981655] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.870 [2024-12-14 22:19:10.055556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:49.870 [2024-12-14 22:19:10.079752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.870 [2024-12-14 22:19:10.079791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.870 [2024-12-14 22:19:10.079799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.870 [2024-12-14 22:19:10.079804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.870 [2024-12-14 22:19:10.079825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:49.870 [2024-12-14 22:19:10.081252] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.870 [2024-12-14 22:19:10.081288] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.870 [2024-12-14 22:19:10.081395] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.870 [2024-12-14 22:19:10.081396] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.870 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.870 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:49.870 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:49.870 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:49.870 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.870 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.870 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:49.870 [2024-12-14 22:19:10.378448] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.870 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:49.870 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:49.870 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.129 22:19:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:50.129 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.389 22:19:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:50.389 22:19:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.648 22:19:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:50.648 22:19:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:50.648 22:19:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.906 22:19:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:50.906 22:19:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.165 22:19:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:51.165 22:19:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.424 22:19:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:51.424 22:19:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:51.683 22:19:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:51.683 22:19:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:51.683 22:19:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:51.942 22:19:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:51.942 22:19:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:52.201 22:19:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.459 [2024-12-14 22:19:13.086900] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.459 22:19:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:52.459 22:19:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:52.717 22:19:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:54.094 22:19:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:54.094 22:19:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:54.094 22:19:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:54.094 22:19:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:54.094 22:19:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:54.094 22:19:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:56.004 22:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:56.004 22:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:56.004 22:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:56.004 22:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:56.004 22:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:56.004 22:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:56.004 22:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:56.004 [global] 00:09:56.004 thread=1 00:09:56.004 invalidate=1 00:09:56.004 rw=write 00:09:56.004 time_based=1 00:09:56.004 runtime=1 00:09:56.004 ioengine=libaio 00:09:56.004 direct=1 00:09:56.004 bs=4096 00:09:56.004 iodepth=1 00:09:56.004 norandommap=0 00:09:56.004 numjobs=1 00:09:56.004 00:09:56.004 
verify_dump=1 00:09:56.004 verify_backlog=512 00:09:56.004 verify_state_save=0 00:09:56.004 do_verify=1 00:09:56.004 verify=crc32c-intel 00:09:56.004 [job0] 00:09:56.004 filename=/dev/nvme0n1 00:09:56.004 [job1] 00:09:56.004 filename=/dev/nvme0n2 00:09:56.004 [job2] 00:09:56.004 filename=/dev/nvme0n3 00:09:56.004 [job3] 00:09:56.004 filename=/dev/nvme0n4 00:09:56.004 Could not set queue depth (nvme0n1) 00:09:56.004 Could not set queue depth (nvme0n2) 00:09:56.004 Could not set queue depth (nvme0n3) 00:09:56.004 Could not set queue depth (nvme0n4) 00:09:56.262 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.262 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.262 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.262 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.262 fio-3.35 00:09:56.262 Starting 4 threads 00:09:57.638 00:09:57.638 job0: (groupid=0, jobs=1): err= 0: pid=187856: Sat Dec 14 22:19:18 2024 00:09:57.638 read: IOPS=1002, BW=4012KiB/s (4108kB/s)(4156KiB/1036msec) 00:09:57.638 slat (nsec): min=6359, max=23493, avg=7489.94, stdev=2038.23 00:09:57.638 clat (usec): min=166, max=41974, avg=739.98, stdev=4554.06 00:09:57.638 lat (usec): min=173, max=41996, avg=747.47, stdev=4555.03 00:09:57.638 clat percentiles (usec): 00:09:57.638 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:09:57.638 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 221], 00:09:57.638 | 70.00th=[ 231], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:09:57.638 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:57.638 | 99.99th=[42206] 00:09:57.638 write: IOPS=1482, BW=5931KiB/s (6073kB/s)(6144KiB/1036msec); 0 zone resets 00:09:57.638 slat (nsec): min=8601, max=70664, avg=10480.60, 
stdev=2413.50 00:09:57.639 clat (usec): min=111, max=351, avg=153.95, stdev=28.91 00:09:57.639 lat (usec): min=120, max=406, avg=164.43, stdev=29.54 00:09:57.639 clat percentiles (usec): 00:09:57.639 | 1.00th=[ 117], 5.00th=[ 120], 10.00th=[ 124], 20.00th=[ 129], 00:09:57.639 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 145], 60.00th=[ 155], 00:09:57.639 | 70.00th=[ 169], 80.00th=[ 184], 90.00th=[ 196], 95.00th=[ 202], 00:09:57.639 | 99.00th=[ 223], 99.50th=[ 231], 99.90th=[ 334], 99.95th=[ 351], 00:09:57.639 | 99.99th=[ 351] 00:09:57.639 bw ( KiB/s): min= 4096, max= 8192, per=38.85%, avg=6144.00, stdev=2896.31, samples=2 00:09:57.639 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:09:57.639 lat (usec) : 250=90.02%, 500=9.44% 00:09:57.639 lat (msec) : 10=0.04%, 50=0.50% 00:09:57.639 cpu : usr=1.26%, sys=2.32%, ctx=2576, majf=0, minf=1 00:09:57.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.639 issued rwts: total=1039,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.639 job1: (groupid=0, jobs=1): err= 0: pid=187857: Sat Dec 14 22:19:18 2024 00:09:57.639 read: IOPS=27, BW=108KiB/s (111kB/s)(112KiB/1035msec) 00:09:57.639 slat (nsec): min=7101, max=23016, avg=19497.82, stdev=5595.03 00:09:57.639 clat (usec): min=269, max=43446, avg=33900.23, stdev=15931.58 00:09:57.639 lat (usec): min=277, max=43467, avg=33919.72, stdev=15934.35 00:09:57.639 clat percentiles (usec): 00:09:57.639 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[40633], 00:09:57.639 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:57.639 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:09:57.639 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 
99.95th=[43254], 00:09:57.639 | 99.99th=[43254] 00:09:57.639 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:09:57.639 slat (nsec): min=9042, max=41001, avg=10287.45, stdev=1722.00 00:09:57.639 clat (usec): min=130, max=350, avg=154.40, stdev=13.90 00:09:57.639 lat (usec): min=139, max=391, avg=164.68, stdev=14.80 00:09:57.639 clat percentiles (usec): 00:09:57.639 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:09:57.639 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 157], 00:09:57.639 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 176], 00:09:57.639 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 351], 99.95th=[ 351], 00:09:57.639 | 99.99th=[ 351] 00:09:57.639 bw ( KiB/s): min= 4096, max= 4096, per=25.90%, avg=4096.00, stdev= 0.00, samples=1 00:09:57.639 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:57.639 lat (usec) : 250=94.63%, 500=0.93%, 750=0.19% 00:09:57.639 lat (msec) : 50=4.26% 00:09:57.639 cpu : usr=0.10%, sys=0.68%, ctx=540, majf=0, minf=2 00:09:57.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.639 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.639 job2: (groupid=0, jobs=1): err= 0: pid=187858: Sat Dec 14 22:19:18 2024 00:09:57.639 read: IOPS=62, BW=250KiB/s (256kB/s)(256KiB/1024msec) 00:09:57.639 slat (nsec): min=6633, max=29988, avg=12776.73, stdev=7547.42 00:09:57.639 clat (usec): min=217, max=41970, avg=14392.12, stdev=19629.89 00:09:57.639 lat (usec): min=224, max=41992, avg=14404.89, stdev=19636.80 00:09:57.639 clat percentiles (usec): 00:09:57.639 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 249], 00:09:57.639 | 30.00th=[ 302], 40.00th=[ 
318], 50.00th=[ 326], 60.00th=[ 355], 00:09:57.639 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:09:57.639 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:57.639 | 99.99th=[42206] 00:09:57.639 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:09:57.639 slat (nsec): min=9346, max=40455, avg=10391.11, stdev=1656.24 00:09:57.639 clat (usec): min=139, max=379, avg=186.77, stdev=20.82 00:09:57.639 lat (usec): min=149, max=420, avg=197.17, stdev=21.41 00:09:57.639 clat percentiles (usec): 00:09:57.639 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 172], 00:09:57.639 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:09:57.639 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 217], 00:09:57.639 | 99.00th=[ 249], 99.50th=[ 289], 99.90th=[ 379], 99.95th=[ 379], 00:09:57.639 | 99.99th=[ 379] 00:09:57.639 bw ( KiB/s): min= 4096, max= 4096, per=25.90%, avg=4096.00, stdev= 0.00, samples=1 00:09:57.639 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:57.639 lat (usec) : 250=90.28%, 500=5.90% 00:09:57.639 lat (msec) : 50=3.82% 00:09:57.639 cpu : usr=0.59%, sys=0.29%, ctx=576, majf=0, minf=1 00:09:57.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.639 issued rwts: total=64,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.639 job3: (groupid=0, jobs=1): err= 0: pid=187859: Sat Dec 14 22:19:18 2024 00:09:57.639 read: IOPS=1067, BW=4271KiB/s (4373kB/s)(4356KiB/1020msec) 00:09:57.639 slat (nsec): min=6915, max=41845, avg=8424.18, stdev=2484.84 00:09:57.639 clat (usec): min=150, max=41978, avg=680.60, stdev=4291.70 00:09:57.639 lat (usec): min=171, max=42001, avg=689.02, 
stdev=4292.49 00:09:57.639 clat percentiles (usec): 00:09:57.639 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 200], 00:09:57.639 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 221], 00:09:57.639 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[ 269], 95.00th=[ 285], 00:09:57.639 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:09:57.639 | 99.99th=[42206] 00:09:57.639 write: IOPS=1505, BW=6024KiB/s (6168kB/s)(6144KiB/1020msec); 0 zone resets 00:09:57.639 slat (nsec): min=10094, max=50982, avg=11509.37, stdev=2097.15 00:09:57.639 clat (usec): min=122, max=345, avg=159.20, stdev=19.76 00:09:57.639 lat (usec): min=133, max=389, avg=170.71, stdev=20.42 00:09:57.639 clat percentiles (usec): 00:09:57.639 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:09:57.639 | 30.00th=[ 147], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 165], 00:09:57.639 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 192], 00:09:57.639 | 99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 289], 99.95th=[ 347], 00:09:57.639 | 99.99th=[ 347] 00:09:57.639 bw ( KiB/s): min= 4096, max= 8192, per=38.85%, avg=6144.00, stdev=2896.31, samples=2 00:09:57.639 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:09:57.639 lat (usec) : 250=93.94%, 500=5.52%, 750=0.04% 00:09:57.639 lat (msec) : 10=0.04%, 50=0.46% 00:09:57.639 cpu : usr=1.96%, sys=4.22%, ctx=2625, majf=0, minf=1 00:09:57.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.639 issued rwts: total=1089,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.639 00:09:57.639 Run status group 0 (all jobs): 00:09:57.639 READ: bw=8571KiB/s (8777kB/s), 108KiB/s-4271KiB/s (111kB/s-4373kB/s), io=8880KiB (9093kB), 
run=1020-1036msec 00:09:57.639 WRITE: bw=15.4MiB/s (16.2MB/s), 1979KiB/s-6024KiB/s (2026kB/s-6168kB/s), io=16.0MiB (16.8MB), run=1020-1036msec 00:09:57.639 00:09:57.639 Disk stats (read/write): 00:09:57.639 nvme0n1: ios=1081/1536, merge=0/0, ticks=580/222, in_queue=802, util=86.47% 00:09:57.639 nvme0n2: ios=38/512, merge=0/0, ticks=766/79, in_queue=845, util=87.17% 00:09:57.639 nvme0n3: ios=59/512, merge=0/0, ticks=715/92, in_queue=807, util=89.02% 00:09:57.639 nvme0n4: ios=1080/1536, merge=0/0, ticks=527/232, in_queue=759, util=89.67% 00:09:57.639 22:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:57.639 [global] 00:09:57.639 thread=1 00:09:57.639 invalidate=1 00:09:57.639 rw=randwrite 00:09:57.639 time_based=1 00:09:57.639 runtime=1 00:09:57.639 ioengine=libaio 00:09:57.639 direct=1 00:09:57.639 bs=4096 00:09:57.639 iodepth=1 00:09:57.639 norandommap=0 00:09:57.639 numjobs=1 00:09:57.639 00:09:57.639 verify_dump=1 00:09:57.639 verify_backlog=512 00:09:57.639 verify_state_save=0 00:09:57.639 do_verify=1 00:09:57.639 verify=crc32c-intel 00:09:57.639 [job0] 00:09:57.639 filename=/dev/nvme0n1 00:09:57.639 [job1] 00:09:57.639 filename=/dev/nvme0n2 00:09:57.639 [job2] 00:09:57.639 filename=/dev/nvme0n3 00:09:57.639 [job3] 00:09:57.639 filename=/dev/nvme0n4 00:09:57.639 Could not set queue depth (nvme0n1) 00:09:57.639 Could not set queue depth (nvme0n2) 00:09:57.639 Could not set queue depth (nvme0n3) 00:09:57.639 Could not set queue depth (nvme0n4) 00:09:57.897 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.897 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.897 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.897 job3: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.897 fio-3.35 00:09:57.897 Starting 4 threads 00:09:59.280 00:09:59.280 job0: (groupid=0, jobs=1): err= 0: pid=188223: Sat Dec 14 22:19:19 2024 00:09:59.280 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:59.280 slat (nsec): min=5689, max=23411, avg=7396.66, stdev=1637.85 00:09:59.280 clat (usec): min=168, max=41934, avg=442.92, stdev=2980.84 00:09:59.280 lat (usec): min=175, max=41957, avg=450.32, stdev=2981.26 00:09:59.280 clat percentiles (usec): 00:09:59.280 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 200], 00:09:59.280 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 223], 00:09:59.280 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 260], 00:09:59.280 | 99.00th=[ 281], 99.50th=[40633], 99.90th=[41681], 99.95th=[41681], 00:09:59.280 | 99.99th=[41681] 00:09:59.280 write: IOPS=1694, BW=6777KiB/s (6940kB/s)(6784KiB/1001msec); 0 zone resets 00:09:59.280 slat (nsec): min=8663, max=36382, avg=10224.92, stdev=1543.09 00:09:59.280 clat (usec): min=109, max=373, avg=167.18, stdev=34.50 00:09:59.280 lat (usec): min=119, max=409, avg=177.40, stdev=34.70 00:09:59.280 clat percentiles (usec): 00:09:59.280 | 1.00th=[ 123], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 141], 00:09:59.280 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 157], 60.00th=[ 165], 00:09:59.280 | 70.00th=[ 180], 80.00th=[ 192], 90.00th=[ 219], 95.00th=[ 245], 00:09:59.280 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 281], 99.95th=[ 375], 00:09:59.280 | 99.99th=[ 375] 00:09:59.280 bw ( KiB/s): min= 8192, max= 8192, per=36.11%, avg=8192.00, stdev= 0.00, samples=1 00:09:59.280 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:59.280 lat (usec) : 250=92.17%, 500=7.52% 00:09:59.280 lat (msec) : 2=0.03%, 20=0.03%, 50=0.25% 00:09:59.280 cpu : usr=1.60%, sys=2.90%, ctx=3233, majf=0, minf=1 00:09:59.280 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.280 issued rwts: total=1536,1696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.280 job1: (groupid=0, jobs=1): err= 0: pid=188224: Sat Dec 14 22:19:19 2024 00:09:59.280 read: IOPS=21, BW=86.9KiB/s (89.0kB/s)(88.0KiB/1013msec) 00:09:59.280 slat (nsec): min=22151, max=32164, avg=23255.64, stdev=2008.68 00:09:59.280 clat (usec): min=40847, max=42361, avg=41259.72, stdev=482.96 00:09:59.280 lat (usec): min=40870, max=42393, avg=41282.98, stdev=483.94 00:09:59.280 clat percentiles (usec): 00:09:59.280 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:59.280 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:59.280 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:59.280 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:59.280 | 99.99th=[42206] 00:09:59.280 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:09:59.280 slat (nsec): min=9269, max=50541, avg=10471.48, stdev=2096.66 00:09:59.280 clat (usec): min=127, max=274, avg=189.13, stdev=17.16 00:09:59.280 lat (usec): min=137, max=305, avg=199.60, stdev=17.42 00:09:59.280 clat percentiles (usec): 00:09:59.280 | 1.00th=[ 137], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 178], 00:09:59.281 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:09:59.281 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 217], 00:09:59.281 | 99.00th=[ 233], 99.50th=[ 247], 99.90th=[ 277], 99.95th=[ 277], 00:09:59.281 | 99.99th=[ 277] 00:09:59.281 bw ( KiB/s): min= 4096, max= 4096, per=18.06%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.281 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 
00:09:59.281 lat (usec) : 250=95.51%, 500=0.37% 00:09:59.281 lat (msec) : 50=4.12% 00:09:59.281 cpu : usr=0.20%, sys=0.49%, ctx=537, majf=0, minf=1 00:09:59.281 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.281 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.281 job2: (groupid=0, jobs=1): err= 0: pid=188225: Sat Dec 14 22:19:19 2024 00:09:59.281 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:59.281 slat (nsec): min=6736, max=16426, avg=7577.35, stdev=581.39 00:09:59.281 clat (usec): min=153, max=1283, avg=204.43, stdev=31.40 00:09:59.281 lat (usec): min=161, max=1291, avg=212.01, stdev=31.41 00:09:59.281 clat percentiles (usec): 00:09:59.281 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 180], 20.00th=[ 186], 00:09:59.281 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:09:59.281 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 241], 00:09:59.281 | 99.00th=[ 289], 99.50th=[ 314], 99.90th=[ 351], 99.95th=[ 433], 00:09:59.281 | 99.99th=[ 1287] 00:09:59.281 write: IOPS=2663, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec); 0 zone resets 00:09:59.281 slat (nsec): min=9463, max=41240, avg=10596.07, stdev=1230.92 00:09:59.281 clat (usec): min=115, max=302, avg=156.19, stdev=32.11 00:09:59.281 lat (usec): min=126, max=312, avg=166.78, stdev=32.29 00:09:59.281 clat percentiles (usec): 00:09:59.281 | 1.00th=[ 120], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 131], 00:09:59.281 | 30.00th=[ 135], 40.00th=[ 141], 50.00th=[ 147], 60.00th=[ 155], 00:09:59.281 | 70.00th=[ 163], 80.00th=[ 176], 90.00th=[ 200], 95.00th=[ 235], 00:09:59.281 | 99.00th=[ 260], 99.50th=[ 262], 99.90th=[ 273], 99.95th=[ 302], 00:09:59.281 | 99.99th=[ 302] 00:09:59.281 bw ( 
KiB/s): min=12288, max=12288, per=54.17%, avg=12288.00, stdev= 0.00, samples=1 00:09:59.281 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:59.281 lat (usec) : 250=97.30%, 500=2.68% 00:09:59.281 lat (msec) : 2=0.02% 00:09:59.281 cpu : usr=3.20%, sys=4.40%, ctx=5227, majf=0, minf=1 00:09:59.281 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.281 issued rwts: total=2560,2666,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.281 job3: (groupid=0, jobs=1): err= 0: pid=188227: Sat Dec 14 22:19:19 2024 00:09:59.281 read: IOPS=535, BW=2142KiB/s (2194kB/s)(2228KiB/1040msec) 00:09:59.281 slat (nsec): min=5910, max=27349, avg=8210.94, stdev=2616.67 00:09:59.281 clat (usec): min=186, max=42142, avg=1514.45, stdev=7120.72 00:09:59.281 lat (usec): min=194, max=42150, avg=1522.66, stdev=7121.82 00:09:59.281 clat percentiles (usec): 00:09:59.281 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:09:59.281 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:09:59.281 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 269], 00:09:59.281 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:59.281 | 99.99th=[42206] 00:09:59.281 write: IOPS=984, BW=3938KiB/s (4033kB/s)(4096KiB/1040msec); 0 zone resets 00:09:59.281 slat (nsec): min=9585, max=37752, avg=10925.37, stdev=1735.81 00:09:59.281 clat (usec): min=116, max=290, avg=171.17, stdev=24.28 00:09:59.281 lat (usec): min=126, max=328, avg=182.10, stdev=24.56 00:09:59.281 clat percentiles (usec): 00:09:59.281 | 1.00th=[ 129], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 149], 00:09:59.281 | 30.00th=[ 153], 40.00th=[ 161], 50.00th=[ 172], 60.00th=[ 180], 00:09:59.281 | 70.00th=[ 188], 
80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 210], 00:09:59.281 | 99.00th=[ 223], 99.50th=[ 225], 99.90th=[ 262], 99.95th=[ 289], 00:09:59.281 | 99.99th=[ 289] 00:09:59.281 bw ( KiB/s): min= 8192, max= 8192, per=36.11%, avg=8192.00, stdev= 0.00, samples=1 00:09:59.281 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:59.281 lat (usec) : 250=96.39%, 500=2.47% 00:09:59.281 lat (msec) : 50=1.14% 00:09:59.281 cpu : usr=0.58%, sys=1.64%, ctx=1582, majf=0, minf=1 00:09:59.281 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.281 issued rwts: total=557,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.281 00:09:59.281 Run status group 0 (all jobs): 00:09:59.281 READ: bw=17.6MiB/s (18.4MB/s), 86.9KiB/s-9.99MiB/s (89.0kB/s-10.5MB/s), io=18.3MiB (19.1MB), run=1001-1040msec 00:09:59.281 WRITE: bw=22.2MiB/s (23.2MB/s), 2022KiB/s-10.4MiB/s (2070kB/s-10.9MB/s), io=23.0MiB (24.2MB), run=1001-1040msec 00:09:59.281 00:09:59.281 Disk stats (read/write): 00:09:59.281 nvme0n1: ios=1136/1536, merge=0/0, ticks=592/242, in_queue=834, util=86.67% 00:09:59.281 nvme0n2: ios=49/512, merge=0/0, ticks=1451/100, in_queue=1551, util=98.88% 00:09:59.281 nvme0n3: ios=2090/2534, merge=0/0, ticks=1282/365, in_queue=1647, util=99.58% 00:09:59.281 nvme0n4: ios=576/1024, merge=0/0, ticks=1308/175, in_queue=1483, util=99.06% 00:09:59.281 22:19:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:59.281 [global] 00:09:59.281 thread=1 00:09:59.281 invalidate=1 00:09:59.281 rw=write 00:09:59.281 time_based=1 00:09:59.281 runtime=1 00:09:59.281 ioengine=libaio 00:09:59.281 direct=1 
00:09:59.281 bs=4096 00:09:59.281 iodepth=128 00:09:59.281 norandommap=0 00:09:59.281 numjobs=1 00:09:59.281 00:09:59.281 verify_dump=1 00:09:59.281 verify_backlog=512 00:09:59.281 verify_state_save=0 00:09:59.281 do_verify=1 00:09:59.281 verify=crc32c-intel 00:09:59.281 [job0] 00:09:59.281 filename=/dev/nvme0n1 00:09:59.281 [job1] 00:09:59.281 filename=/dev/nvme0n2 00:09:59.281 [job2] 00:09:59.281 filename=/dev/nvme0n3 00:09:59.281 [job3] 00:09:59.281 filename=/dev/nvme0n4 00:09:59.281 Could not set queue depth (nvme0n1) 00:09:59.281 Could not set queue depth (nvme0n2) 00:09:59.281 Could not set queue depth (nvme0n3) 00:09:59.281 Could not set queue depth (nvme0n4) 00:09:59.539 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.539 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.539 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.539 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.539 fio-3.35 00:09:59.539 Starting 4 threads 00:10:00.918 00:10:00.918 job0: (groupid=0, jobs=1): err= 0: pid=188594: Sat Dec 14 22:19:21 2024 00:10:00.918 read: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec) 00:10:00.918 slat (nsec): min=1015, max=15152k, avg=80333.81, stdev=581040.37 00:10:00.918 clat (usec): min=2032, max=29803, avg=10470.59, stdev=3318.25 00:10:00.918 lat (usec): min=2040, max=29809, avg=10550.92, stdev=3339.42 00:10:00.918 clat percentiles (usec): 00:10:00.918 | 1.00th=[ 3228], 5.00th=[ 5145], 10.00th=[ 6849], 20.00th=[ 8586], 00:10:00.918 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10421], 00:10:00.918 | 70.00th=[10814], 80.00th=[11863], 90.00th=[13960], 95.00th=[17171], 00:10:00.918 | 99.00th=[25560], 99.50th=[26608], 99.90th=[26608], 99.95th=[26608], 00:10:00.918 | 
99.99th=[29754] 00:10:00.918 write: IOPS=6553, BW=25.6MiB/s (26.8MB/s)(25.7MiB/1005msec); 0 zone resets 00:10:00.918 slat (nsec): min=1767, max=8812.3k, avg=69174.66, stdev=410315.74 00:10:00.918 clat (usec): min=366, max=23325, avg=9594.27, stdev=1916.73 00:10:00.918 lat (usec): min=463, max=23329, avg=9663.44, stdev=1959.09 00:10:00.918 clat percentiles (usec): 00:10:00.918 | 1.00th=[ 3130], 5.00th=[ 5407], 10.00th=[ 7308], 20.00th=[ 8717], 00:10:00.918 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:10:00.918 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10683], 95.00th=[11207], 00:10:00.918 | 99.00th=[14877], 99.50th=[15795], 99.90th=[20579], 99.95th=[20579], 00:10:00.918 | 99.99th=[23200] 00:10:00.918 bw ( KiB/s): min=25400, max=26272, per=35.42%, avg=25836.00, stdev=616.60, samples=2 00:10:00.918 iops : min= 6350, max= 6568, avg=6459.00, stdev=154.15, samples=2 00:10:00.918 lat (usec) : 500=0.01% 00:10:00.918 lat (msec) : 2=0.02%, 4=1.81%, 10=42.88%, 20=54.55%, 50=0.72% 00:10:00.918 cpu : usr=3.88%, sys=5.48%, ctx=610, majf=0, minf=1 00:10:00.918 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:00.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.918 issued rwts: total=6144,6586,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.918 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.918 job1: (groupid=0, jobs=1): err= 0: pid=188599: Sat Dec 14 22:19:21 2024 00:10:00.918 read: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec) 00:10:00.918 slat (nsec): min=1280, max=16263k, avg=97188.58, stdev=725492.43 00:10:00.918 clat (usec): min=3558, max=40460, avg=12093.76, stdev=3849.59 00:10:00.918 lat (usec): min=3565, max=40482, avg=12190.95, stdev=3899.02 00:10:00.918 clat percentiles (usec): 00:10:00.918 | 1.00th=[ 4817], 5.00th=[ 7898], 10.00th=[ 8455], 20.00th=[ 9765], 
00:10:00.918 | 30.00th=[10159], 40.00th=[10552], 50.00th=[10945], 60.00th=[11469], 00:10:00.918 | 70.00th=[12256], 80.00th=[14484], 90.00th=[17433], 95.00th=[19530], 00:10:00.918 | 99.00th=[25297], 99.50th=[26346], 99.90th=[28181], 99.95th=[28181], 00:10:00.918 | 99.99th=[40633] 00:10:00.918 write: IOPS=5968, BW=23.3MiB/s (24.4MB/s)(23.5MiB/1009msec); 0 zone resets 00:10:00.918 slat (usec): min=2, max=8728, avg=69.69, stdev=355.45 00:10:00.918 clat (usec): min=1671, max=26087, avg=9940.50, stdev=2467.66 00:10:00.918 lat (usec): min=1684, max=26090, avg=10010.19, stdev=2496.68 00:10:00.918 clat percentiles (usec): 00:10:00.918 | 1.00th=[ 3589], 5.00th=[ 5211], 10.00th=[ 6652], 20.00th=[ 8848], 00:10:00.918 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:10:00.918 | 70.00th=[10552], 80.00th=[11076], 90.00th=[11731], 95.00th=[12256], 00:10:00.918 | 99.00th=[19268], 99.50th=[19530], 99.90th=[20579], 99.95th=[21103], 00:10:00.918 | 99.99th=[26084] 00:10:00.918 bw ( KiB/s): min=22584, max=24576, per=32.33%, avg=23580.00, stdev=1408.56, samples=2 00:10:00.918 iops : min= 5646, max= 6144, avg=5895.00, stdev=352.14, samples=2 00:10:00.918 lat (msec) : 2=0.03%, 4=0.94%, 10=31.23%, 20=65.57%, 50=2.22% 00:10:00.918 cpu : usr=4.27%, sys=6.25%, ctx=706, majf=0, minf=1 00:10:00.918 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:00.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.918 issued rwts: total=5632,6022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.918 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.918 job2: (groupid=0, jobs=1): err= 0: pid=188608: Sat Dec 14 22:19:21 2024 00:10:00.918 read: IOPS=2411, BW=9646KiB/s (9878kB/s)(9704KiB/1006msec) 00:10:00.918 slat (nsec): min=1535, max=13128k, avg=220024.59, stdev=1247055.29 00:10:00.918 clat (msec): min=3, max=114, avg=19.63, 
stdev=16.85 00:10:00.918 lat (msec): min=5, max=114, avg=19.85, stdev=17.04 00:10:00.918 clat percentiles (msec): 00:10:00.918 | 1.00th=[ 7], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 13], 00:10:00.918 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 15], 00:10:00.918 | 70.00th=[ 16], 80.00th=[ 20], 90.00th=[ 42], 95.00th=[ 63], 00:10:00.918 | 99.00th=[ 92], 99.50th=[ 111], 99.90th=[ 114], 99.95th=[ 114], 00:10:00.918 | 99.99th=[ 114] 00:10:00.918 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:10:00.918 slat (usec): min=2, max=13000, avg=176.11, stdev=856.80 00:10:00.918 clat (usec): min=1511, max=114114, avg=31067.70, stdev=25962.99 00:10:00.918 lat (usec): min=1523, max=114122, avg=31243.81, stdev=26072.11 00:10:00.918 clat percentiles (msec): 00:10:00.918 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 10], 20.00th=[ 13], 00:10:00.918 | 30.00th=[ 20], 40.00th=[ 22], 50.00th=[ 23], 60.00th=[ 24], 00:10:00.918 | 70.00th=[ 25], 80.00th=[ 50], 90.00th=[ 64], 95.00th=[ 102], 00:10:00.918 | 99.00th=[ 113], 99.50th=[ 114], 99.90th=[ 114], 99.95th=[ 114], 00:10:00.918 | 99.99th=[ 114] 00:10:00.918 bw ( KiB/s): min= 9520, max=10960, per=14.04%, avg=10240.00, stdev=1018.23, samples=2 00:10:00.918 iops : min= 2380, max= 2740, avg=2560.00, stdev=254.56, samples=2 00:10:00.918 lat (msec) : 2=0.38%, 4=0.74%, 10=9.35%, 20=45.09%, 50=30.97% 00:10:00.918 lat (msec) : 100=10.41%, 250=3.07% 00:10:00.918 cpu : usr=1.79%, sys=2.89%, ctx=358, majf=0, minf=1 00:10:00.918 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:10:00.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.918 issued rwts: total=2426,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.918 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.918 job3: (groupid=0, jobs=1): err= 0: pid=188610: Sat Dec 14 22:19:21 2024 00:10:00.918 read: 
IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:10:00.918 slat (nsec): min=1632, max=12186k, avg=123998.27, stdev=840833.29 00:10:00.918 clat (usec): min=6173, max=33882, avg=14728.00, stdev=4656.86 00:10:00.918 lat (usec): min=6179, max=34498, avg=14852.00, stdev=4731.37 00:10:00.918 clat percentiles (usec): 00:10:00.918 | 1.00th=[ 7439], 5.00th=[10945], 10.00th=[11863], 20.00th=[12256], 00:10:00.918 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12911], 60.00th=[13960], 00:10:00.918 | 70.00th=[14615], 80.00th=[15401], 90.00th=[21627], 95.00th=[26608], 00:10:00.918 | 99.00th=[31065], 99.50th=[32637], 99.90th=[33817], 99.95th=[33817], 00:10:00.918 | 99.99th=[33817] 00:10:00.918 write: IOPS=3247, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1012msec); 0 zone resets 00:10:00.918 slat (usec): min=2, max=25460, avg=182.38, stdev=1027.67 00:10:00.918 clat (usec): min=3345, max=62120, avg=24229.01, stdev=15300.11 00:10:00.918 lat (usec): min=3355, max=62134, avg=24411.39, stdev=15369.85 00:10:00.918 clat percentiles (usec): 00:10:00.918 | 1.00th=[ 4621], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11469], 00:10:00.918 | 30.00th=[12256], 40.00th=[19268], 50.00th=[21890], 60.00th=[22414], 00:10:00.918 | 70.00th=[24249], 80.00th=[34866], 90.00th=[52691], 95.00th=[57410], 00:10:00.918 | 99.00th=[61604], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:10:00.918 | 99.99th=[62129] 00:10:00.918 bw ( KiB/s): min=11008, max=14256, per=17.32%, avg=12632.00, stdev=2296.68, samples=2 00:10:00.918 iops : min= 2752, max= 3564, avg=3158.00, stdev=574.17, samples=2 00:10:00.918 lat (msec) : 4=0.38%, 10=4.25%, 20=59.67%, 50=28.89%, 100=6.81% 00:10:00.918 cpu : usr=3.86%, sys=2.97%, ctx=333, majf=0, minf=1 00:10:00.918 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:00.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.918 issued rwts: 
total=3072,3286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.918 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.918 00:10:00.918 Run status group 0 (all jobs): 00:10:00.918 READ: bw=66.7MiB/s (69.9MB/s), 9646KiB/s-23.9MiB/s (9878kB/s-25.0MB/s), io=67.5MiB (70.8MB), run=1005-1012msec 00:10:00.918 WRITE: bw=71.2MiB/s (74.7MB/s), 9.94MiB/s-25.6MiB/s (10.4MB/s-26.8MB/s), io=72.1MiB (75.6MB), run=1005-1012msec 00:10:00.918 00:10:00.918 Disk stats (read/write): 00:10:00.918 nvme0n1: ios=5170/5632, merge=0/0, ticks=41025/38681, in_queue=79706, util=86.67% 00:10:00.918 nvme0n2: ios=4658/5120, merge=0/0, ticks=55623/49058, in_queue=104681, util=94.82% 00:10:00.918 nvme0n3: ios=2086/2151, merge=0/0, ticks=39501/66194, in_queue=105695, util=98.02% 00:10:00.918 nvme0n4: ios=2619/2775, merge=0/0, ticks=37518/64664, in_queue=102182, util=100.00% 00:10:00.919 22:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:00.919 [global] 00:10:00.919 thread=1 00:10:00.919 invalidate=1 00:10:00.919 rw=randwrite 00:10:00.919 time_based=1 00:10:00.919 runtime=1 00:10:00.919 ioengine=libaio 00:10:00.919 direct=1 00:10:00.919 bs=4096 00:10:00.919 iodepth=128 00:10:00.919 norandommap=0 00:10:00.919 numjobs=1 00:10:00.919 00:10:00.919 verify_dump=1 00:10:00.919 verify_backlog=512 00:10:00.919 verify_state_save=0 00:10:00.919 do_verify=1 00:10:00.919 verify=crc32c-intel 00:10:00.919 [job0] 00:10:00.919 filename=/dev/nvme0n1 00:10:00.919 [job1] 00:10:00.919 filename=/dev/nvme0n2 00:10:00.919 [job2] 00:10:00.919 filename=/dev/nvme0n3 00:10:00.919 [job3] 00:10:00.919 filename=/dev/nvme0n4 00:10:00.919 Could not set queue depth (nvme0n1) 00:10:00.919 Could not set queue depth (nvme0n2) 00:10:00.919 Could not set queue depth (nvme0n3) 00:10:00.919 Could not set queue depth (nvme0n4) 00:10:01.176 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.176 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.176 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.176 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.176 fio-3.35 00:10:01.176 Starting 4 threads 00:10:02.551 00:10:02.551 job0: (groupid=0, jobs=1): err= 0: pid=189016: Sat Dec 14 22:19:23 2024 00:10:02.551 read: IOPS=4136, BW=16.2MiB/s (16.9MB/s)(16.3MiB/1011msec) 00:10:02.551 slat (nsec): min=1131, max=12432k, avg=117644.07, stdev=759952.61 00:10:02.551 clat (usec): min=5538, max=63502, avg=14015.28, stdev=7870.01 00:10:02.551 lat (usec): min=5551, max=63511, avg=14132.93, stdev=7923.53 00:10:02.551 clat percentiles (usec): 00:10:02.551 | 1.00th=[ 7767], 5.00th=[ 9503], 10.00th=[10814], 20.00th=[11207], 00:10:02.551 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12125], 60.00th=[12256], 00:10:02.551 | 70.00th=[12518], 80.00th=[13566], 90.00th=[19268], 95.00th=[21627], 00:10:02.551 | 99.00th=[57410], 99.50th=[61080], 99.90th=[63701], 99.95th=[63701], 00:10:02.551 | 99.99th=[63701] 00:10:02.551 write: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec); 0 zone resets 00:10:02.551 slat (nsec): min=1804, max=28531k, avg=105646.68, stdev=745130.51 00:10:02.551 clat (usec): min=1616, max=63505, avg=15074.51, stdev=7193.32 00:10:02.551 lat (usec): min=1623, max=63514, avg=15180.16, stdev=7226.68 00:10:02.551 clat percentiles (usec): 00:10:02.551 | 1.00th=[ 4424], 5.00th=[ 8586], 10.00th=[ 9765], 20.00th=[11600], 00:10:02.551 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[13829], 00:10:02.551 | 70.00th=[15664], 80.00th=[19006], 90.00th=[20841], 95.00th=[30016], 00:10:02.551 | 99.00th=[52167], 99.50th=[56886], 99.90th=[59507], 99.95th=[59507], 00:10:02.551 | 99.99th=[63701] 
00:10:02.551 bw ( KiB/s): min=16048, max=20480, per=25.19%, avg=18264.00, stdev=3133.90, samples=2 00:10:02.551 iops : min= 4012, max= 5120, avg=4566.00, stdev=783.47, samples=2 00:10:02.551 lat (msec) : 2=0.27%, 10=8.78%, 20=80.28%, 50=9.04%, 100=1.62% 00:10:02.551 cpu : usr=2.28%, sys=5.25%, ctx=413, majf=0, minf=1 00:10:02.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:02.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.551 issued rwts: total=4182,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.551 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.551 job1: (groupid=0, jobs=1): err= 0: pid=189033: Sat Dec 14 22:19:23 2024 00:10:02.551 read: IOPS=4560, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1007msec) 00:10:02.551 slat (nsec): min=1041, max=13688k, avg=110341.07, stdev=768882.85 00:10:02.551 clat (usec): min=3222, max=38406, avg=13669.41, stdev=4559.48 00:10:02.551 lat (usec): min=3230, max=38431, avg=13779.75, stdev=4610.45 00:10:02.551 clat percentiles (usec): 00:10:02.551 | 1.00th=[ 5407], 5.00th=[ 8291], 10.00th=[ 9634], 20.00th=[10552], 00:10:02.551 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12125], 60.00th=[12518], 00:10:02.551 | 70.00th=[14484], 80.00th=[16909], 90.00th=[19792], 95.00th=[22152], 00:10:02.551 | 99.00th=[27919], 99.50th=[28181], 99.90th=[32375], 99.95th=[32375], 00:10:02.551 | 99.99th=[38536] 00:10:02.551 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:10:02.551 slat (usec): min=2, max=9501, avg=99.11, stdev=502.67 00:10:02.551 clat (usec): min=469, max=47439, avg=14077.11, stdev=8052.97 00:10:02.551 lat (usec): min=478, max=47447, avg=14176.22, stdev=8102.37 00:10:02.551 clat percentiles (usec): 00:10:02.551 | 1.00th=[ 2278], 5.00th=[ 5211], 10.00th=[ 7701], 20.00th=[ 9110], 00:10:02.551 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 
60.00th=[12125], 00:10:02.551 | 70.00th=[13566], 80.00th=[19530], 90.00th=[21627], 95.00th=[32375], 00:10:02.551 | 99.00th=[44827], 99.50th=[46400], 99.90th=[47449], 99.95th=[47449], 00:10:02.551 | 99.99th=[47449] 00:10:02.551 bw ( KiB/s): min=13792, max=23072, per=25.42%, avg=18432.00, stdev=6561.95, samples=2 00:10:02.551 iops : min= 3448, max= 5768, avg=4608.00, stdev=1640.49, samples=2 00:10:02.551 lat (usec) : 500=0.03%, 750=0.04%, 1000=0.07% 00:10:02.551 lat (msec) : 2=0.29%, 4=1.54%, 10=17.49%, 20=68.09%, 50=12.45% 00:10:02.551 cpu : usr=3.18%, sys=4.77%, ctx=553, majf=0, minf=1 00:10:02.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:02.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.551 issued rwts: total=4592,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.551 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.551 job2: (groupid=0, jobs=1): err= 0: pid=189050: Sat Dec 14 22:19:23 2024 00:10:02.551 read: IOPS=4833, BW=18.9MiB/s (19.8MB/s)(18.9MiB/1002msec) 00:10:02.551 slat (nsec): min=1046, max=10821k, avg=93875.31, stdev=582610.41 00:10:02.551 clat (usec): min=273, max=24208, avg=12252.10, stdev=2635.98 00:10:02.551 lat (usec): min=280, max=24210, avg=12345.97, stdev=2677.46 00:10:02.551 clat percentiles (usec): 00:10:02.551 | 1.00th=[ 4146], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10552], 00:10:02.551 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12125], 60.00th=[12649], 00:10:02.551 | 70.00th=[13304], 80.00th=[13566], 90.00th=[14484], 95.00th=[16450], 00:10:02.551 | 99.00th=[23200], 99.50th=[23987], 99.90th=[24249], 99.95th=[24249], 00:10:02.551 | 99.99th=[24249] 00:10:02.551 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:10:02.551 slat (nsec): min=1726, max=18016k, avg=98141.46, stdev=658189.51 00:10:02.551 clat (usec): min=1176, max=50373, 
avg=13233.49, stdev=6118.14 00:10:02.551 lat (usec): min=1186, max=67103, avg=13331.63, stdev=6175.19 00:10:02.551 clat percentiles (usec): 00:10:02.551 | 1.00th=[ 4555], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[ 9634], 00:10:02.551 | 30.00th=[10945], 40.00th=[11731], 50.00th=[12256], 60.00th=[13173], 00:10:02.551 | 70.00th=[13435], 80.00th=[13829], 90.00th=[16450], 95.00th=[21627], 00:10:02.551 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50594], 99.95th=[50594], 00:10:02.551 | 99.99th=[50594] 00:10:02.551 bw ( KiB/s): min=18312, max=22648, per=28.24%, avg=20480.00, stdev=3066.02, samples=2 00:10:02.551 iops : min= 4578, max= 5662, avg=5120.00, stdev=766.50, samples=2 00:10:02.551 lat (usec) : 500=0.32%, 750=0.01% 00:10:02.551 lat (msec) : 2=0.15%, 4=0.32%, 10=17.99%, 20=77.52%, 50=3.06% 00:10:02.551 lat (msec) : 100=0.63% 00:10:02.551 cpu : usr=2.80%, sys=4.70%, ctx=551, majf=0, minf=1 00:10:02.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:02.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.551 issued rwts: total=4843,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.551 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.551 job3: (groupid=0, jobs=1): err= 0: pid=189058: Sat Dec 14 22:19:23 2024 00:10:02.551 read: IOPS=4102, BW=16.0MiB/s (16.8MB/s)(16.7MiB/1045msec) 00:10:02.551 slat (nsec): min=1122, max=17799k, avg=113708.71, stdev=709364.48 00:10:02.551 clat (usec): min=1558, max=63247, avg=15498.43, stdev=7955.35 00:10:02.551 lat (usec): min=1563, max=63249, avg=15612.13, stdev=7964.97 00:10:02.551 clat percentiles (usec): 00:10:02.551 | 1.00th=[ 3982], 5.00th=[ 7832], 10.00th=[10028], 20.00th=[11994], 00:10:02.551 | 30.00th=[12387], 40.00th=[13435], 50.00th=[13960], 60.00th=[14353], 00:10:02.551 | 70.00th=[15008], 80.00th=[17171], 90.00th=[21627], 95.00th=[25560], 00:10:02.551 | 
99.00th=[59507], 99.50th=[61080], 99.90th=[62653], 99.95th=[63177], 00:10:02.551 | 99.99th=[63177] 00:10:02.551 write: IOPS=4409, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1045msec); 0 zone resets 00:10:02.551 slat (nsec): min=1818, max=15994k, avg=106596.54, stdev=669619.80 00:10:02.551 clat (usec): min=1064, max=37766, avg=14346.81, stdev=5296.43 00:10:02.551 lat (usec): min=1073, max=37772, avg=14453.41, stdev=5316.92 00:10:02.551 clat percentiles (usec): 00:10:02.551 | 1.00th=[ 5145], 5.00th=[ 8291], 10.00th=[10421], 20.00th=[11338], 00:10:02.551 | 30.00th=[11600], 40.00th=[12780], 50.00th=[13698], 60.00th=[14091], 00:10:02.551 | 70.00th=[14615], 80.00th=[15795], 90.00th=[19530], 95.00th=[25822], 00:10:02.551 | 99.00th=[36439], 99.50th=[36439], 99.90th=[38011], 99.95th=[38011], 00:10:02.551 | 99.99th=[38011] 00:10:02.551 bw ( KiB/s): min=18008, max=18856, per=25.42%, avg=18432.00, stdev=599.63, samples=2 00:10:02.551 iops : min= 4502, max= 4714, avg=4608.00, stdev=149.91, samples=2 00:10:02.551 lat (msec) : 2=0.07%, 4=0.46%, 10=8.07%, 20=80.18%, 50=10.55% 00:10:02.551 lat (msec) : 100=0.67% 00:10:02.551 cpu : usr=3.16%, sys=3.54%, ctx=482, majf=0, minf=1 00:10:02.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:02.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.551 issued rwts: total=4287,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.551 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.551 00:10:02.551 Run status group 0 (all jobs): 00:10:02.551 READ: bw=66.9MiB/s (70.2MB/s), 16.0MiB/s-18.9MiB/s (16.8MB/s-19.8MB/s), io=69.9MiB (73.3MB), run=1002-1045msec 00:10:02.551 WRITE: bw=70.8MiB/s (74.3MB/s), 17.2MiB/s-20.0MiB/s (18.1MB/s-20.9MB/s), io=74.0MiB (77.6MB), run=1002-1045msec 00:10:02.551 00:10:02.551 Disk stats (read/write): 00:10:02.551 nvme0n1: ios=3619/4095, merge=0/0, ticks=18586/33727, 
in_queue=52313, util=97.39% 00:10:02.551 nvme0n2: ios=3981/4096, merge=0/0, ticks=41607/41144, in_queue=82751, util=90.55% 00:10:02.551 nvme0n3: ios=4059/4096, merge=0/0, ticks=21144/24142, in_queue=45286, util=88.82% 00:10:02.551 nvme0n4: ios=3641/3808, merge=0/0, ticks=17079/19524, in_queue=36603, util=92.77% 00:10:02.551 22:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:02.551 22:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=189208 00:10:02.551 22:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:02.551 22:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:02.551 [global] 00:10:02.551 thread=1 00:10:02.551 invalidate=1 00:10:02.551 rw=read 00:10:02.551 time_based=1 00:10:02.552 runtime=10 00:10:02.552 ioengine=libaio 00:10:02.552 direct=1 00:10:02.552 bs=4096 00:10:02.552 iodepth=1 00:10:02.552 norandommap=1 00:10:02.552 numjobs=1 00:10:02.552 00:10:02.552 [job0] 00:10:02.552 filename=/dev/nvme0n1 00:10:02.552 [job1] 00:10:02.552 filename=/dev/nvme0n2 00:10:02.552 [job2] 00:10:02.552 filename=/dev/nvme0n3 00:10:02.552 [job3] 00:10:02.552 filename=/dev/nvme0n4 00:10:02.552 Could not set queue depth (nvme0n1) 00:10:02.552 Could not set queue depth (nvme0n2) 00:10:02.552 Could not set queue depth (nvme0n3) 00:10:02.552 Could not set queue depth (nvme0n4) 00:10:02.809 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.810 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.810 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.810 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.810 
fio-3.35 00:10:02.810 Starting 4 threads 00:10:05.339 22:19:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:05.597 22:19:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:05.597 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:10:05.597 fio: pid=189555, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:05.855 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=43155456, buflen=4096 00:10:05.855 fio: pid=189554, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:05.855 22:19:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:05.855 22:19:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:06.113 22:19:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.113 22:19:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:06.113 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=44232704, buflen=4096 00:10:06.113 fio: pid=189521, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:06.113 22:19:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.113 22:19:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:06.113 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=647168, buflen=4096 00:10:06.113 fio: pid=189538, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:06.372 00:10:06.372 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189521: Sat Dec 14 22:19:27 2024 00:10:06.372 read: IOPS=3405, BW=13.3MiB/s (13.9MB/s)(42.2MiB/3171msec) 00:10:06.372 slat (usec): min=5, max=9467, avg= 7.77, stdev=91.50 00:10:06.372 clat (usec): min=149, max=44093, avg=282.71, stdev=1902.51 00:10:06.372 lat (usec): min=155, max=50705, avg=290.47, stdev=1925.82 00:10:06.372 clat percentiles (usec): 00:10:06.372 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 182], 00:10:06.372 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 196], 00:10:06.372 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 219], 00:10:06.372 | 99.00th=[ 251], 99.50th=[ 273], 99.90th=[41157], 99.95th=[41157], 00:10:06.372 | 99.99th=[42730] 00:10:06.372 bw ( KiB/s): min= 174, max=19760, per=56.35%, avg=14387.67, stdev=7785.96, samples=6 00:10:06.372 iops : min= 43, max= 4940, avg=3596.83, stdev=1946.67, samples=6 00:10:06.372 lat (usec) : 250=98.97%, 500=0.78%, 1000=0.01% 00:10:06.372 lat (msec) : 4=0.01%, 50=0.22% 00:10:06.372 cpu : usr=0.85%, sys=3.00%, ctx=10804, majf=0, minf=1 00:10:06.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.372 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.372 issued rwts: total=10800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.372 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): 
pid=189538: Sat Dec 14 22:19:27 2024 00:10:06.372 read: IOPS=47, BW=187KiB/s (192kB/s)(632KiB/3378msec) 00:10:06.372 slat (nsec): min=6566, max=86098, avg=16900.48, stdev=10845.06 00:10:06.372 clat (usec): min=191, max=42449, avg=21191.22, stdev=20472.71 00:10:06.372 lat (usec): min=205, max=42456, avg=21208.08, stdev=20473.16 00:10:06.372 clat percentiles (usec): 00:10:06.372 | 1.00th=[ 198], 5.00th=[ 217], 10.00th=[ 233], 20.00th=[ 249], 00:10:06.372 | 30.00th=[ 260], 40.00th=[ 277], 50.00th=[40633], 60.00th=[40633], 00:10:06.372 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:10:06.372 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:06.372 | 99.99th=[42206] 00:10:06.372 bw ( KiB/s): min= 93, max= 280, per=0.76%, avg=195.50, stdev=69.28, samples=6 00:10:06.372 iops : min= 23, max= 70, avg=48.83, stdev=17.39, samples=6 00:10:06.372 lat (usec) : 250=20.75%, 500=27.04%, 750=0.63% 00:10:06.372 lat (msec) : 50=50.94% 00:10:06.372 cpu : usr=0.00%, sys=0.15%, ctx=162, majf=0, minf=2 00:10:06.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.372 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.372 issued rwts: total=159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.372 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189554: Sat Dec 14 22:19:27 2024 00:10:06.372 read: IOPS=3608, BW=14.1MiB/s (14.8MB/s)(41.2MiB/2920msec) 00:10:06.372 slat (nsec): min=5426, max=41854, avg=9039.32, stdev=1510.62 00:10:06.372 clat (usec): min=162, max=41996, avg=265.05, stdev=1325.50 00:10:06.372 lat (usec): min=168, max=42018, avg=274.09, stdev=1325.78 00:10:06.372 clat percentiles (usec): 00:10:06.372 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 
208], 00:10:06.372 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:10:06.372 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 243], 95.00th=[ 249], 00:10:06.372 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[40633], 99.95th=[41157], 00:10:06.372 | 99.99th=[42206] 00:10:06.372 bw ( KiB/s): min= 1640, max=17288, per=54.12%, avg=13817.60, stdev=6812.58, samples=5 00:10:06.372 iops : min= 410, max= 4322, avg=3454.40, stdev=1703.15, samples=5 00:10:06.372 lat (usec) : 250=95.07%, 500=4.80%, 750=0.02% 00:10:06.372 lat (msec) : 50=0.10% 00:10:06.372 cpu : usr=2.36%, sys=5.96%, ctx=10537, majf=0, minf=2 00:10:06.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.372 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.372 issued rwts: total=10537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.372 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189555: Sat Dec 14 22:19:27 2024 00:10:06.372 read: IOPS=24, BW=97.7KiB/s (100kB/s)(268KiB/2742msec) 00:10:06.372 slat (nsec): min=9988, max=35783, avg=22755.19, stdev=2269.26 00:10:06.372 clat (usec): min=441, max=42031, avg=40577.85, stdev=4993.61 00:10:06.372 lat (usec): min=477, max=42053, avg=40600.61, stdev=4991.97 00:10:06.372 clat percentiles (usec): 00:10:06.372 | 1.00th=[ 441], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:06.372 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:06.372 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:10:06.372 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:06.372 | 99.99th=[42206] 00:10:06.372 bw ( KiB/s): min= 96, max= 104, per=0.38%, avg=97.60, stdev= 3.58, samples=5 00:10:06.372 iops : min= 24, max= 26, avg=24.40, 
stdev= 0.89, samples=5 00:10:06.372 lat (usec) : 500=1.47% 00:10:06.372 lat (msec) : 50=97.06% 00:10:06.372 cpu : usr=0.07%, sys=0.00%, ctx=68, majf=0, minf=2 00:10:06.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.372 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.372 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.372 00:10:06.372 Run status group 0 (all jobs): 00:10:06.372 READ: bw=24.9MiB/s (26.1MB/s), 97.7KiB/s-14.1MiB/s (100kB/s-14.8MB/s), io=84.2MiB (88.3MB), run=2742-3378msec 00:10:06.372 00:10:06.372 Disk stats (read/write): 00:10:06.372 nvme0n1: ios=10795/0, merge=0/0, ticks=2914/0, in_queue=2914, util=95.32% 00:10:06.372 nvme0n2: ios=157/0, merge=0/0, ticks=3310/0, in_queue=3310, util=96.29% 00:10:06.372 nvme0n3: ios=10270/0, merge=0/0, ticks=2583/0, in_queue=2583, util=96.52% 00:10:06.372 nvme0n4: ios=64/0, merge=0/0, ticks=2598/0, in_queue=2598, util=96.41% 00:10:06.372 22:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.372 22:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:06.630 22:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.630 22:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:06.888 22:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.888 
22:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:07.147 22:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.147 22:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:07.147 22:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:07.147 22:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 189208 00:10:07.147 22:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:07.147 22:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:07.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.406 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:07.406 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:07.406 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:07.406 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:07.406 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:07.406 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:07.406 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:07.406 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 
-eq 0 ']' 00:10:07.406 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:07.406 nvmf hotplug test: fio failed as expected 00:10:07.406 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:07.667 rmmod nvme_tcp 00:10:07.667 rmmod nvme_fabrics 00:10:07.667 rmmod nvme_keyring 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@129 -- # return 0 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 186536 ']' 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 186536 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 186536 ']' 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 186536 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 186536 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 186536' 00:10:07.667 killing process with pid 186536 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 186536 00:10:07.667 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 186536 00:10:07.928 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:07.928 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:07.928 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:07.928 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:07.928 22:19:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:07.928 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:07.928 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:07.928 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:07.928 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:07.928 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.928 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.928 22:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.835 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:09.835 00:10:09.835 real 0m26.917s 00:10:09.835 user 1m47.663s 00:10:09.835 sys 0m8.485s 00:10:09.835 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.835 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.835 ************************************ 00:10:09.835 END TEST nvmf_fio_target 00:10:09.835 ************************************ 00:10:09.835 22:19:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:09.835 22:19:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:09.835 22:19:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.835 22:19:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:10:10.094 ************************************ 00:10:10.094 START TEST nvmf_bdevio 00:10:10.094 ************************************ 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:10.094 * Looking for test storage... 00:10:10.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.094 22:19:30 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:10.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.094 --rc genhtml_branch_coverage=1 00:10:10.094 --rc genhtml_function_coverage=1 00:10:10.094 --rc genhtml_legend=1 00:10:10.094 --rc geninfo_all_blocks=1 00:10:10.094 --rc geninfo_unexecuted_blocks=1 00:10:10.094 00:10:10.094 ' 00:10:10.094 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:10.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.095 --rc genhtml_branch_coverage=1 00:10:10.095 --rc genhtml_function_coverage=1 00:10:10.095 --rc genhtml_legend=1 00:10:10.095 --rc geninfo_all_blocks=1 00:10:10.095 --rc geninfo_unexecuted_blocks=1 00:10:10.095 00:10:10.095 ' 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:10.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.095 --rc genhtml_branch_coverage=1 00:10:10.095 --rc genhtml_function_coverage=1 00:10:10.095 --rc genhtml_legend=1 00:10:10.095 --rc geninfo_all_blocks=1 00:10:10.095 --rc geninfo_unexecuted_blocks=1 00:10:10.095 00:10:10.095 ' 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:10.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.095 --rc genhtml_branch_coverage=1 00:10:10.095 --rc genhtml_function_coverage=1 00:10:10.095 --rc genhtml_legend=1 00:10:10.095 --rc geninfo_all_blocks=1 00:10:10.095 --rc geninfo_unexecuted_blocks=1 00:10:10.095 00:10:10.095 ' 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:10.095 22:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.665 22:19:36 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:16.665 22:19:36 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:16.665 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:16.665 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.665 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:16.666 
22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:16.666 Found net devices under 0000:af:00.0: cvl_0_0 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:16.666 Found net devices under 0000:af:00.1: cvl_0_1 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:16.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:16.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:10:16.666 00:10:16.666 --- 10.0.0.2 ping statistics --- 00:10:16.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.666 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:16.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:10:16.666 00:10:16.666 --- 10.0.0.1 ping statistics --- 00:10:16.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.666 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:16.666 22:19:36 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=193738 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 193738 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 193738 ']' 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.666 22:19:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.666 [2024-12-14 22:19:36.984424] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:16.666 [2024-12-14 22:19:36.984470] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.666 [2024-12-14 22:19:37.066901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:16.666 [2024-12-14 22:19:37.089625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.666 [2024-12-14 22:19:37.089661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.666 [2024-12-14 22:19:37.089671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.666 [2024-12-14 22:19:37.089677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.666 [2024-12-14 22:19:37.089682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:16.666 [2024-12-14 22:19:37.091108] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:10:16.666 [2024-12-14 22:19:37.091214] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:10:16.666 [2024-12-14 22:19:37.091323] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.666 [2024-12-14 22:19:37.091323] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:10:16.666 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.666 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:16.666 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:16.666 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:16.666 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.666 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.666 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:16.666 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.666 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.666 [2024-12-14 22:19:37.223315] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.666 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.666 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:16.666 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.666 22:19:37 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.666 Malloc0 00:10:16.666 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.666 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.667 [2024-12-14 22:19:37.287131] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:16.667 { 00:10:16.667 "params": { 00:10:16.667 "name": "Nvme$subsystem", 00:10:16.667 "trtype": "$TEST_TRANSPORT", 00:10:16.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:16.667 "adrfam": "ipv4", 00:10:16.667 "trsvcid": "$NVMF_PORT", 00:10:16.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:16.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:16.667 "hdgst": ${hdgst:-false}, 00:10:16.667 "ddgst": ${ddgst:-false} 00:10:16.667 }, 00:10:16.667 "method": "bdev_nvme_attach_controller" 00:10:16.667 } 00:10:16.667 EOF 00:10:16.667 )") 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:16.667 22:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:16.667 "params": { 00:10:16.667 "name": "Nvme1", 00:10:16.667 "trtype": "tcp", 00:10:16.667 "traddr": "10.0.0.2", 00:10:16.667 "adrfam": "ipv4", 00:10:16.667 "trsvcid": "4420", 00:10:16.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:16.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:16.667 "hdgst": false, 00:10:16.667 "ddgst": false 00:10:16.667 }, 00:10:16.667 "method": "bdev_nvme_attach_controller" 00:10:16.667 }' 00:10:16.667 [2024-12-14 22:19:37.335341] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:10:16.667 [2024-12-14 22:19:37.335381] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid193943 ] 00:10:16.667 [2024-12-14 22:19:37.410680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:16.667 [2024-12-14 22:19:37.435712] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.667 [2024-12-14 22:19:37.435822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.667 [2024-12-14 22:19:37.435823] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.926 I/O targets: 00:10:16.926 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:16.926 00:10:16.926 00:10:16.926 CUnit - A unit testing framework for C - Version 2.1-3 00:10:16.926 http://cunit.sourceforge.net/ 00:10:16.926 00:10:16.926 00:10:16.926 Suite: bdevio tests on: Nvme1n1 00:10:16.926 Test: blockdev write read block ...passed 00:10:16.926 Test: blockdev write zeroes read block ...passed 00:10:16.926 Test: blockdev write zeroes read no split ...passed 00:10:16.926 Test: blockdev write zeroes read split 
...passed 00:10:16.926 Test: blockdev write zeroes read split partial ...passed 00:10:16.926 Test: blockdev reset ...[2024-12-14 22:19:37.741625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:16.926 [2024-12-14 22:19:37.741682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b1630 (9): Bad file descriptor 00:10:16.926 [2024-12-14 22:19:37.794826] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:16.926 passed 00:10:16.926 Test: blockdev write read 8 blocks ...passed 00:10:16.926 Test: blockdev write read size > 128k ...passed 00:10:16.926 Test: blockdev write read invalid size ...passed 00:10:17.185 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:17.185 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:17.185 Test: blockdev write read max offset ...passed 00:10:17.185 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:17.185 Test: blockdev writev readv 8 blocks ...passed 00:10:17.185 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.185 Test: blockdev writev readv block ...passed 00:10:17.185 Test: blockdev writev readv size > 128k ...passed 00:10:17.185 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.185 Test: blockdev comparev and writev ...[2024-12-14 22:19:38.047884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.185 [2024-12-14 22:19:38.047919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:17.185 [2024-12-14 22:19:38.047938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.185 [2024-12-14 
22:19:38.047946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:17.185 [2024-12-14 22:19:38.048189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.185 [2024-12-14 22:19:38.048200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:17.185 [2024-12-14 22:19:38.048211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.185 [2024-12-14 22:19:38.048218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:17.185 [2024-12-14 22:19:38.048443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.185 [2024-12-14 22:19:38.048453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:17.185 [2024-12-14 22:19:38.048464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.185 [2024-12-14 22:19:38.048471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:17.185 [2024-12-14 22:19:38.048709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.185 [2024-12-14 22:19:38.048719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:17.185 [2024-12-14 22:19:38.048730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.185 [2024-12-14 22:19:38.048737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:17.444 passed 00:10:17.444 Test: blockdev nvme passthru rw ...passed 00:10:17.444 Test: blockdev nvme passthru vendor specific ...[2024-12-14 22:19:38.130388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.444 [2024-12-14 22:19:38.130404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:17.444 [2024-12-14 22:19:38.130508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.444 [2024-12-14 22:19:38.130518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:17.444 [2024-12-14 22:19:38.130619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.445 [2024-12-14 22:19:38.130629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:17.445 [2024-12-14 22:19:38.130726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.445 [2024-12-14 22:19:38.130735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:17.445 passed 00:10:17.445 Test: blockdev nvme admin passthru ...passed 00:10:17.445 Test: blockdev copy ...passed 00:10:17.445 00:10:17.445 Run Summary: Type Total Ran Passed Failed Inactive 00:10:17.445 suites 1 1 n/a 0 0 00:10:17.445 tests 23 23 23 0 0 00:10:17.445 asserts 152 152 152 0 n/a 00:10:17.445 00:10:17.445 Elapsed time = 1.119 seconds 
00:10:17.445 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.445 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.445 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.445 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.445 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:17.445 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:17.445 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:17.445 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.704 rmmod nvme_tcp 00:10:17.704 rmmod nvme_fabrics 00:10:17.704 rmmod nvme_keyring 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 193738 ']' 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 193738 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 193738 ']' 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 193738 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 193738 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 193738' 00:10:17.704 killing process with pid 193738 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 193738 00:10:17.704 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 193738 00:10:17.964 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:17.964 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:17.964 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:17.964 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:17.964 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:17.964 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:17.964 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:17.964 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:10:17.964 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:17.964 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.964 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.964 22:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.870 22:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:19.870 00:10:19.870 real 0m9.951s 00:10:19.870 user 0m9.846s 00:10:19.870 sys 0m4.976s 00:10:19.870 22:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.870 22:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:19.870 ************************************ 00:10:19.870 END TEST nvmf_bdevio 00:10:19.870 ************************************ 00:10:19.870 22:19:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:19.870 00:10:19.870 real 4m33.891s 00:10:19.870 user 10m25.294s 00:10:19.870 sys 1m37.113s 00:10:19.870 22:19:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.870 22:19:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:19.870 ************************************ 00:10:19.870 END TEST nvmf_target_core 00:10:19.870 ************************************ 00:10:20.130 22:19:40 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:20.130 22:19:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:20.130 22:19:40 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.130 22:19:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:10:20.130 ************************************ 00:10:20.130 START TEST nvmf_target_extra 00:10:20.130 ************************************ 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:20.130 * Looking for test storage... 00:10:20.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:20.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.130 --rc genhtml_branch_coverage=1 00:10:20.130 --rc genhtml_function_coverage=1 00:10:20.130 --rc genhtml_legend=1 00:10:20.130 --rc geninfo_all_blocks=1 
00:10:20.130 --rc geninfo_unexecuted_blocks=1 00:10:20.130 00:10:20.130 ' 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:20.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.130 --rc genhtml_branch_coverage=1 00:10:20.130 --rc genhtml_function_coverage=1 00:10:20.130 --rc genhtml_legend=1 00:10:20.130 --rc geninfo_all_blocks=1 00:10:20.130 --rc geninfo_unexecuted_blocks=1 00:10:20.130 00:10:20.130 ' 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:20.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.130 --rc genhtml_branch_coverage=1 00:10:20.130 --rc genhtml_function_coverage=1 00:10:20.130 --rc genhtml_legend=1 00:10:20.130 --rc geninfo_all_blocks=1 00:10:20.130 --rc geninfo_unexecuted_blocks=1 00:10:20.130 00:10:20.130 ' 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:20.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.130 --rc genhtml_branch_coverage=1 00:10:20.130 --rc genhtml_function_coverage=1 00:10:20.130 --rc genhtml_legend=1 00:10:20.130 --rc geninfo_all_blocks=1 00:10:20.130 --rc geninfo_unexecuted_blocks=1 00:10:20.130 00:10:20.130 ' 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.130 22:19:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.131 22:19:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.131 22:19:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:20.131 22:19:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.131 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:20.131 22:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.131 22:19:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.131 22:19:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.131 22:19:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.131 22:19:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.131 22:19:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.131 22:19:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.131 22:19:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.131 22:19:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.131 22:19:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:20.131 22:19:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:20.131 22:19:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:20.131 22:19:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:20.131 22:19:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:20.131 22:19:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.131 22:19:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:20.390 ************************************ 00:10:20.390 START TEST nvmf_example 00:10:20.390 ************************************ 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:20.390 * Looking for test storage... 00:10:20.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.390 
22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:20.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.390 --rc genhtml_branch_coverage=1 00:10:20.390 --rc genhtml_function_coverage=1 00:10:20.390 --rc genhtml_legend=1 00:10:20.390 --rc geninfo_all_blocks=1 00:10:20.390 --rc geninfo_unexecuted_blocks=1 00:10:20.390 00:10:20.390 ' 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:20.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.390 --rc genhtml_branch_coverage=1 00:10:20.390 --rc genhtml_function_coverage=1 00:10:20.390 --rc genhtml_legend=1 00:10:20.390 --rc geninfo_all_blocks=1 00:10:20.390 --rc geninfo_unexecuted_blocks=1 00:10:20.390 00:10:20.390 ' 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:20.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.390 --rc genhtml_branch_coverage=1 00:10:20.390 --rc genhtml_function_coverage=1 00:10:20.390 --rc genhtml_legend=1 00:10:20.390 --rc geninfo_all_blocks=1 00:10:20.390 --rc geninfo_unexecuted_blocks=1 00:10:20.390 00:10:20.390 ' 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:20.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.390 --rc 
genhtml_branch_coverage=1 00:10:20.390 --rc genhtml_function_coverage=1 00:10:20.390 --rc genhtml_legend=1 00:10:20.390 --rc geninfo_all_blocks=1 00:10:20.390 --rc geninfo_unexecuted_blocks=1 00:10:20.390 00:10:20.390 ' 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.390 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:20.391 22:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.391 
22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:20.391 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:26.956 22:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:26.956 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:26.956 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:26.956 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:26.957 Found net devices under 0000:af:00.0: cvl_0_0 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:26.957 22:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:26.957 Found net devices under 0000:af:00.1: cvl_0_1 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:26.957 
22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:26.957 22:19:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:26.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:26.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:10:26.957 00:10:26.957 --- 10.0.0.2 ping statistics --- 00:10:26.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.957 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:26.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:26.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:10:26.957 00:10:26.957 --- 10.0.0.1 ping statistics --- 00:10:26.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.957 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:26.957 22:19:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=197721 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 197721 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 197721 ']' 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:10:26.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.957 22:19:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.215 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.215 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:27.215 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:27.215 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:27.215 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.215 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:27.215 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.215 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:27.474 22:19:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:27.474 22:19:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:37.441 Initializing NVMe Controllers 00:10:37.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:37.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:37.441 Initialization complete. Launching workers. 00:10:37.441 ======================================================== 00:10:37.441 Latency(us) 00:10:37.441 Device Information : IOPS MiB/s Average min max 00:10:37.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18236.18 71.24 3510.11 476.31 16297.09 00:10:37.441 ======================================================== 00:10:37.441 Total : 18236.18 71.24 3510.11 476.31 16297.09 00:10:37.441 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:37.700 rmmod nvme_tcp 00:10:37.700 rmmod nvme_fabrics 00:10:37.700 rmmod nvme_keyring 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 197721 ']' 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 197721 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 197721 ']' 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 197721 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 197721 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 197721' 00:10:37.700 killing process with pid 197721 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 197721 00:10:37.700 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 197721 00:10:37.960 nvmf threads initialize successfully 00:10:37.960 bdev subsystem init successfully 00:10:37.960 created a nvmf target service 00:10:37.960 create targets's poll groups done 00:10:37.960 all subsystems of target started 00:10:37.960 nvmf target is running 00:10:37.960 all subsystems of target stopped 00:10:37.960 destroy targets's poll groups done 00:10:37.960 destroyed the nvmf target service 00:10:37.960 bdev subsystem finish 
successfully 00:10:37.960 nvmf threads destroy successfully 00:10:37.960 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:37.960 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:37.960 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:37.960 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:37.960 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:37.960 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:37.960 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:37.960 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:37.960 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:37.960 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.960 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.960 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.866 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:39.866 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:39.866 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.866 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:40.125 00:10:40.125 real 0m19.729s 00:10:40.125 user 0m46.045s 00:10:40.125 sys 0m5.908s 00:10:40.125 22:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.125 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:40.125 ************************************ 00:10:40.125 END TEST nvmf_example 00:10:40.125 ************************************ 00:10:40.125 22:20:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:40.125 22:20:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:40.125 22:20:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.125 22:20:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:40.125 ************************************ 00:10:40.125 START TEST nvmf_filesystem 00:10:40.125 ************************************ 00:10:40.126 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:40.126 * Looking for test storage... 
00:10:40.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.126 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:40.126 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:40.126 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:40.126 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:40.126 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.126 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.126 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.126 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.126 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.126 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.126 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.126 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.126 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.126 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.126 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.126 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:40.126 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:40.126 
22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.126 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:40.126 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:40.126 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:40.126 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.126 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:40.126 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.126 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:40.126 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:40.126 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:40.389 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:40.389 --rc genhtml_branch_coverage=1 00:10:40.389 --rc genhtml_function_coverage=1 00:10:40.389 --rc genhtml_legend=1 00:10:40.389 --rc geninfo_all_blocks=1 00:10:40.389 --rc geninfo_unexecuted_blocks=1 00:10:40.389 00:10:40.389 ' 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:40.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.389 --rc genhtml_branch_coverage=1 00:10:40.389 --rc genhtml_function_coverage=1 00:10:40.389 --rc genhtml_legend=1 00:10:40.389 --rc geninfo_all_blocks=1 00:10:40.389 --rc geninfo_unexecuted_blocks=1 00:10:40.389 00:10:40.389 ' 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:40.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.389 --rc genhtml_branch_coverage=1 00:10:40.389 --rc genhtml_function_coverage=1 00:10:40.389 --rc genhtml_legend=1 00:10:40.389 --rc geninfo_all_blocks=1 00:10:40.389 --rc geninfo_unexecuted_blocks=1 00:10:40.389 00:10:40.389 ' 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:40.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.389 --rc genhtml_branch_coverage=1 00:10:40.389 --rc genhtml_function_coverage=1 00:10:40.389 --rc genhtml_legend=1 00:10:40.389 --rc geninfo_all_blocks=1 00:10:40.389 --rc geninfo_unexecuted_blocks=1 00:10:40.389 00:10:40.389 ' 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:40.389 22:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:40.389 22:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:40.389 22:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 
-- # CONFIG_ARCH=native 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:40.389 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:40.390 
22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:40.390 22:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:40.390 #define SPDK_CONFIG_H 00:10:40.390 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:40.390 #define SPDK_CONFIG_APPS 1 00:10:40.390 #define SPDK_CONFIG_ARCH native 00:10:40.390 #undef SPDK_CONFIG_ASAN 00:10:40.390 #undef SPDK_CONFIG_AVAHI 00:10:40.390 #undef SPDK_CONFIG_CET 00:10:40.390 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:40.390 #define SPDK_CONFIG_COVERAGE 1 00:10:40.390 #define SPDK_CONFIG_CROSS_PREFIX 00:10:40.390 #undef SPDK_CONFIG_CRYPTO 00:10:40.390 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:40.390 #undef SPDK_CONFIG_CUSTOMOCF 00:10:40.390 #undef SPDK_CONFIG_DAOS 00:10:40.390 #define SPDK_CONFIG_DAOS_DIR 00:10:40.390 #define SPDK_CONFIG_DEBUG 1 00:10:40.390 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:40.390 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:40.390 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:40.390 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:40.390 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:40.390 #undef SPDK_CONFIG_DPDK_UADK 00:10:40.390 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:40.390 #define SPDK_CONFIG_EXAMPLES 1 00:10:40.390 #undef SPDK_CONFIG_FC 00:10:40.390 #define SPDK_CONFIG_FC_PATH 00:10:40.390 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:40.390 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:40.390 #define SPDK_CONFIG_FSDEV 1 00:10:40.390 #undef SPDK_CONFIG_FUSE 00:10:40.390 #undef SPDK_CONFIG_FUZZER 00:10:40.390 #define 
SPDK_CONFIG_FUZZER_LIB 00:10:40.390 #undef SPDK_CONFIG_GOLANG 00:10:40.390 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:40.390 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:40.390 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:40.390 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:40.390 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:40.390 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:40.390 #undef SPDK_CONFIG_HAVE_LZ4 00:10:40.390 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:40.390 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:40.390 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:40.390 #define SPDK_CONFIG_IDXD 1 00:10:40.390 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:40.390 #undef SPDK_CONFIG_IPSEC_MB 00:10:40.390 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:40.390 #define SPDK_CONFIG_ISAL 1 00:10:40.390 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:40.390 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:40.390 #define SPDK_CONFIG_LIBDIR 00:10:40.390 #undef SPDK_CONFIG_LTO 00:10:40.390 #define SPDK_CONFIG_MAX_LCORES 128 00:10:40.390 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:40.390 #define SPDK_CONFIG_NVME_CUSE 1 00:10:40.390 #undef SPDK_CONFIG_OCF 00:10:40.390 #define SPDK_CONFIG_OCF_PATH 00:10:40.390 #define SPDK_CONFIG_OPENSSL_PATH 00:10:40.390 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:40.390 #define SPDK_CONFIG_PGO_DIR 00:10:40.390 #undef SPDK_CONFIG_PGO_USE 00:10:40.390 #define SPDK_CONFIG_PREFIX /usr/local 00:10:40.390 #undef SPDK_CONFIG_RAID5F 00:10:40.390 #undef SPDK_CONFIG_RBD 00:10:40.390 #define SPDK_CONFIG_RDMA 1 00:10:40.390 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:40.390 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:40.390 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:40.390 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:40.390 #define SPDK_CONFIG_SHARED 1 00:10:40.390 #undef SPDK_CONFIG_SMA 00:10:40.390 #define SPDK_CONFIG_TESTS 1 00:10:40.390 #undef SPDK_CONFIG_TSAN 00:10:40.390 #define SPDK_CONFIG_UBLK 1 00:10:40.390 #define SPDK_CONFIG_UBSAN 1 00:10:40.390 #undef 
SPDK_CONFIG_UNIT_TESTS 00:10:40.390 #undef SPDK_CONFIG_URING 00:10:40.390 #define SPDK_CONFIG_URING_PATH 00:10:40.390 #undef SPDK_CONFIG_URING_ZNS 00:10:40.390 #undef SPDK_CONFIG_USDT 00:10:40.390 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:40.390 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:40.390 #define SPDK_CONFIG_VFIO_USER 1 00:10:40.390 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:40.390 #define SPDK_CONFIG_VHOST 1 00:10:40.390 #define SPDK_CONFIG_VIRTIO 1 00:10:40.390 #undef SPDK_CONFIG_VTUNE 00:10:40.390 #define SPDK_CONFIG_VTUNE_DIR 00:10:40.390 #define SPDK_CONFIG_WERROR 1 00:10:40.390 #define SPDK_CONFIG_WPDK_DIR 00:10:40.390 #undef SPDK_CONFIG_XNVME 00:10:40.390 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.390 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.391 22:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:40.391 22:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:40.391 
22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:40.391 22:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:40.391 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:40.392 
22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : v22.11.4 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:40.392 22:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:40.392 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 200065 ]] 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 200065 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.LGjnfg 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.LGjnfg/tests/target /tmp/spdk.LGjnfg 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=722997248 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4561432576 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:40.393 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88901562368 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=95552401408 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6650839040 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47766167552 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776198656 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19087470592 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19110481920 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23011328 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47775887360 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776202752 00:10:40.394 22:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=315392 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9555226624 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9555238912 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:40.394 * Looking for test storage... 
00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88901562368 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8865431552 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.394 22:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:40.394 22:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.394 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:40.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.395 --rc genhtml_branch_coverage=1 00:10:40.395 --rc genhtml_function_coverage=1 00:10:40.395 --rc genhtml_legend=1 00:10:40.395 --rc geninfo_all_blocks=1 00:10:40.395 --rc geninfo_unexecuted_blocks=1 00:10:40.395 00:10:40.395 ' 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:40.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.395 --rc genhtml_branch_coverage=1 00:10:40.395 --rc genhtml_function_coverage=1 00:10:40.395 --rc genhtml_legend=1 00:10:40.395 --rc geninfo_all_blocks=1 00:10:40.395 --rc geninfo_unexecuted_blocks=1 00:10:40.395 00:10:40.395 ' 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:40.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.395 --rc genhtml_branch_coverage=1 00:10:40.395 --rc genhtml_function_coverage=1 00:10:40.395 --rc genhtml_legend=1 00:10:40.395 --rc geninfo_all_blocks=1 00:10:40.395 --rc geninfo_unexecuted_blocks=1 00:10:40.395 00:10:40.395 ' 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:40.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.395 --rc genhtml_branch_coverage=1 00:10:40.395 --rc genhtml_function_coverage=1 00:10:40.395 --rc genhtml_legend=1 00:10:40.395 --rc geninfo_all_blocks=1 00:10:40.395 --rc geninfo_unexecuted_blocks=1 00:10:40.395 00:10:40.395 ' 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.395 22:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.395 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:40.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:40.655 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.226 22:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:47.226 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:47.227 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:47.227 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.227 22:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:47.227 Found net devices under 0000:af:00.0: cvl_0_0 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:47.227 Found net devices under 0000:af:00.1: cvl_0_1 00:10:47.227 22:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:47.227 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:47.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:47.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:10:47.227 00:10:47.227 --- 10.0.0.2 ping statistics --- 00:10:47.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.227 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:47.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:10:47.227 00:10:47.227 --- 10.0.0.1 ping statistics --- 00:10:47.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.227 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:47.227 22:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.227 ************************************ 00:10:47.227 START TEST nvmf_filesystem_no_in_capsule 00:10:47.227 ************************************ 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=203062 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 203062 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 203062 ']' 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.227 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.227 [2024-12-14 22:20:07.344027] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:10:47.227 [2024-12-14 22:20:07.344071] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.228 [2024-12-14 22:20:07.419289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.228 [2024-12-14 22:20:07.441851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.228 [2024-12-14 22:20:07.441891] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:47.228 [2024-12-14 22:20:07.441906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.228 [2024-12-14 22:20:07.441912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.228 [2024-12-14 22:20:07.441918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:47.228 [2024-12-14 22:20:07.443231] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.228 [2024-12-14 22:20:07.443267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.228 [2024-12-14 22:20:07.443348] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.228 [2024-12-14 22:20:07.443349] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.228 [2024-12-14 22:20:07.583630] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.228 Malloc1 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.228 [2024-12-14 22:20:07.740730] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:47.228 22:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:47.228 { 00:10:47.228 "name": "Malloc1", 00:10:47.228 "aliases": [ 00:10:47.228 "b2d7132c-cafd-4393-96a2-3b5b6b8689cf" 00:10:47.228 ], 00:10:47.228 "product_name": "Malloc disk", 00:10:47.228 "block_size": 512, 00:10:47.228 "num_blocks": 1048576, 00:10:47.228 "uuid": "b2d7132c-cafd-4393-96a2-3b5b6b8689cf", 00:10:47.228 "assigned_rate_limits": { 00:10:47.228 "rw_ios_per_sec": 0, 00:10:47.228 "rw_mbytes_per_sec": 0, 00:10:47.228 "r_mbytes_per_sec": 0, 00:10:47.228 "w_mbytes_per_sec": 0 00:10:47.228 }, 00:10:47.228 "claimed": true, 00:10:47.228 "claim_type": "exclusive_write", 00:10:47.228 "zoned": false, 00:10:47.228 "supported_io_types": { 00:10:47.228 "read": true, 00:10:47.228 "write": true, 00:10:47.228 "unmap": true, 00:10:47.228 "flush": true, 00:10:47.228 "reset": true, 00:10:47.228 "nvme_admin": false, 00:10:47.228 "nvme_io": false, 00:10:47.228 "nvme_io_md": false, 00:10:47.228 "write_zeroes": true, 00:10:47.228 "zcopy": true, 00:10:47.228 "get_zone_info": false, 00:10:47.228 "zone_management": false, 00:10:47.228 "zone_append": false, 00:10:47.228 "compare": false, 00:10:47.228 "compare_and_write": 
false, 00:10:47.228 "abort": true, 00:10:47.228 "seek_hole": false, 00:10:47.228 "seek_data": false, 00:10:47.228 "copy": true, 00:10:47.228 "nvme_iov_md": false 00:10:47.228 }, 00:10:47.228 "memory_domains": [ 00:10:47.228 { 00:10:47.228 "dma_device_id": "system", 00:10:47.228 "dma_device_type": 1 00:10:47.228 }, 00:10:47.228 { 00:10:47.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.228 "dma_device_type": 2 00:10:47.228 } 00:10:47.228 ], 00:10:47.228 "driver_specific": {} 00:10:47.228 } 00:10:47.228 ]' 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:47.228 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:48.164 22:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:48.164 22:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:48.164 22:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:48.164 22:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:48.164 22:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:50.069 22:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:50.328 22:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:50.328 22:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.328 22:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:50.328 22:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.328 22:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:50.328 22:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:50.328 22:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:50.328 22:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:50.328 22:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:50.328 22:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:50.328 22:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:50.328 22:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:50.328 22:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:50.328 22:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:50.328 22:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:50.328 22:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:50.587 22:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:50.845 22:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:51.782 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:51.782 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:51.782 22:20:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:51.782 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.782 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.041 ************************************ 00:10:52.041 START TEST filesystem_ext4 00:10:52.041 ************************************ 00:10:52.041 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:52.041 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:52.041 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.041 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:52.041 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:52.041 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:52.041 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:52.041 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:52.041 22:20:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:52.041 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:52.041 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:52.041 mke2fs 1.47.0 (5-Feb-2023) 00:10:52.041 Discarding device blocks: 0/522240 done 00:10:52.041 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:52.041 Filesystem UUID: e2567d8c-0614-4182-937f-9354af8e518a 00:10:52.041 Superblock backups stored on blocks: 00:10:52.041 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:52.041 00:10:52.041 Allocating group tables: 0/64 done 00:10:52.041 Writing inode tables: 0/64 done 00:10:52.300 Creating journal (8192 blocks): done 00:10:54.193 Writing superblocks and filesystem accounting information: 0/64 done 00:10:54.193 00:10:54.193 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:54.193 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:00.755 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:00.755 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:00.755 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:00.755 22:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 203062 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:00.756 00:11:00.756 real 0m8.004s 00:11:00.756 user 0m0.032s 00:11:00.756 sys 0m0.114s 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:00.756 ************************************ 00:11:00.756 END TEST filesystem_ext4 00:11:00.756 ************************************ 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:00.756 
22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.756 ************************************ 00:11:00.756 START TEST filesystem_btrfs 00:11:00.756 ************************************ 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:00.756 22:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:00.756 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:00.756 btrfs-progs v6.8.1 00:11:00.756 See https://btrfs.readthedocs.io for more information. 00:11:00.756 00:11:00.756 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:00.756 NOTE: several default settings have changed in version 5.15, please make sure 00:11:00.756 this does not affect your deployments: 00:11:00.756 - DUP for metadata (-m dup) 00:11:00.756 - enabled no-holes (-O no-holes) 00:11:00.756 - enabled free-space-tree (-R free-space-tree) 00:11:00.756 00:11:00.756 Label: (null) 00:11:00.756 UUID: c0573ef1-4a08-4b48-92ad-18eb13fc9f53 00:11:00.756 Node size: 16384 00:11:00.756 Sector size: 4096 (CPU page size: 4096) 00:11:00.756 Filesystem size: 510.00MiB 00:11:00.756 Block group profiles: 00:11:00.756 Data: single 8.00MiB 00:11:00.756 Metadata: DUP 32.00MiB 00:11:00.756 System: DUP 8.00MiB 00:11:00.756 SSD detected: yes 00:11:00.756 Zoned device: no 00:11:00.756 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:00.756 Checksum: crc32c 00:11:00.756 Number of devices: 1 00:11:00.756 Devices: 00:11:00.756 ID SIZE PATH 00:11:00.756 1 510.00MiB /dev/nvme0n1p1 00:11:00.756 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:00.756 22:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 203062 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:00.756 00:11:00.756 real 0m0.588s 00:11:00.756 user 0m0.035s 00:11:00.756 sys 0m0.149s 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.756 
22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:00.756 ************************************ 00:11:00.756 END TEST filesystem_btrfs 00:11:00.756 ************************************ 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.756 ************************************ 00:11:00.756 START TEST filesystem_xfs 00:11:00.756 ************************************ 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:00.756 22:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:00.756 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:00.756 = sectsz=512 attr=2, projid32bit=1 00:11:00.756 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:00.756 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:00.756 data = bsize=4096 blocks=130560, imaxpct=25 00:11:00.756 = sunit=0 swidth=0 blks 00:11:00.756 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:00.756 log =internal log bsize=4096 blocks=16384, version=2 00:11:00.756 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:00.756 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:01.693 Discarding blocks...Done. 
00:11:01.693 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:01.693 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 203062 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:04.983 22:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:04.983 00:11:04.983 real 0m3.774s 00:11:04.983 user 0m0.020s 00:11:04.983 sys 0m0.126s 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:04.983 ************************************ 00:11:04.983 END TEST filesystem_xfs 00:11:04.983 ************************************ 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:04.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 203062 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 203062 ']' 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 203062 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 203062 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.983 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 203062' 00:11:04.984 killing process with pid 203062 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 203062 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 203062 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:04.984 00:11:04.984 real 0m18.473s 00:11:04.984 user 1m12.804s 00:11:04.984 sys 0m1.568s 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.984 ************************************ 00:11:04.984 END TEST nvmf_filesystem_no_in_capsule 00:11:04.984 ************************************ 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.984 22:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:04.984 ************************************ 00:11:04.984 START TEST nvmf_filesystem_in_capsule 00:11:04.984 ************************************ 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=206420 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 206420 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 206420 ']' 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.984 22:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.984 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.243 [2024-12-14 22:20:25.891851] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:05.244 [2024-12-14 22:20:25.891898] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.244 [2024-12-14 22:20:25.967002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:05.244 [2024-12-14 22:20:25.987579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.244 [2024-12-14 22:20:25.987617] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.244 [2024-12-14 22:20:25.987624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.244 [2024-12-14 22:20:25.987629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.244 [2024-12-14 22:20:25.987634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:05.244 [2024-12-14 22:20:25.989012] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.244 [2024-12-14 22:20:25.989052] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.244 [2024-12-14 22:20:25.989136] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.244 [2024-12-14 22:20:25.989137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.244 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.244 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:05.244 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:05.244 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:05.244 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.244 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.244 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:05.244 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:05.244 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.244 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.504 [2024-12-14 22:20:26.129392] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.504 Malloc1 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.504 22:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.504 [2024-12-14 22:20:26.284078] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.504 22:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:05.504 { 00:11:05.504 "name": "Malloc1", 00:11:05.504 "aliases": [ 00:11:05.504 "8e6dd364-7379-4c92-ab53-1b5ade6f0651" 00:11:05.504 ], 00:11:05.504 "product_name": "Malloc disk", 00:11:05.504 "block_size": 512, 00:11:05.504 "num_blocks": 1048576, 00:11:05.504 "uuid": "8e6dd364-7379-4c92-ab53-1b5ade6f0651", 00:11:05.504 "assigned_rate_limits": { 00:11:05.504 "rw_ios_per_sec": 0, 00:11:05.504 "rw_mbytes_per_sec": 0, 00:11:05.504 "r_mbytes_per_sec": 0, 00:11:05.504 "w_mbytes_per_sec": 0 00:11:05.504 }, 00:11:05.504 "claimed": true, 00:11:05.504 "claim_type": "exclusive_write", 00:11:05.504 "zoned": false, 00:11:05.504 "supported_io_types": { 00:11:05.504 "read": true, 00:11:05.504 "write": true, 00:11:05.504 "unmap": true, 00:11:05.504 "flush": true, 00:11:05.504 "reset": true, 00:11:05.504 "nvme_admin": false, 00:11:05.504 "nvme_io": false, 00:11:05.504 "nvme_io_md": false, 00:11:05.504 "write_zeroes": true, 00:11:05.504 "zcopy": true, 00:11:05.504 "get_zone_info": false, 00:11:05.504 "zone_management": false, 00:11:05.504 "zone_append": false, 00:11:05.504 "compare": false, 00:11:05.504 "compare_and_write": false, 00:11:05.504 "abort": true, 00:11:05.504 "seek_hole": false, 00:11:05.504 "seek_data": false, 00:11:05.504 "copy": true, 00:11:05.504 "nvme_iov_md": false 00:11:05.504 }, 00:11:05.504 "memory_domains": [ 00:11:05.504 { 00:11:05.504 "dma_device_id": "system", 00:11:05.504 "dma_device_type": 1 00:11:05.504 }, 00:11:05.504 { 00:11:05.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.504 "dma_device_type": 2 00:11:05.504 } 00:11:05.504 ], 00:11:05.504 
"driver_specific": {} 00:11:05.504 } 00:11:05.504 ]' 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:05.504 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:05.763 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:05.763 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:05.763 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:05.763 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:05.763 22:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:06.698 22:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:06.698 22:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:06.698 22:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.698 22:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:06.698 22:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:09.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:09.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:09.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:09.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:09.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:09.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:09.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:09.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:09.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:09.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:09.233 22:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:09.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:09.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:09.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:09.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:09.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:09.491 22:20:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:10.425 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:10.425 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:10.425 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:10.425 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.425 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.425 ************************************ 00:11:10.425 START TEST filesystem_in_capsule_ext4 00:11:10.425 ************************************ 00:11:10.425 22:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:10.425 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:10.425 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:10.425 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:10.425 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:10.425 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:10.425 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:10.425 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:10.425 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:10.425 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:10.425 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:10.425 mke2fs 1.47.0 (5-Feb-2023) 00:11:10.425 Discarding device blocks: 
0/522240 done 00:11:10.425 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:10.425 Filesystem UUID: 9b08a452-7076-4262-af59-1c56bbcae0df 00:11:10.425 Superblock backups stored on blocks: 00:11:10.425 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:10.425 00:11:10.425 Allocating group tables: 0/64 done 00:11:10.425 Writing inode tables: 0/64 done 00:11:11.802 Creating journal (8192 blocks): done 00:11:11.802 Writing superblocks and filesystem accounting information: 0/64 done 00:11:11.802 00:11:11.802 22:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:11.802 22:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:17.070 22:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:17.070 22:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:17.070 22:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:17.070 22:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:17.070 22:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:17.070 22:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:17.329 22:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 206420 00:11:17.329 22:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:17.329 22:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:17.329 22:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:17.329 22:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:17.329 00:11:17.329 real 0m6.765s 00:11:17.329 user 0m0.036s 00:11:17.329 sys 0m0.064s 00:11:17.329 22:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.329 22:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:17.329 ************************************ 00:11:17.329 END TEST filesystem_in_capsule_ext4 00:11:17.329 ************************************ 00:11:17.329 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:17.329 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:17.329 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.329 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.329 ************************************ 00:11:17.329 START 
TEST filesystem_in_capsule_btrfs 00:11:17.329 ************************************ 00:11:17.329 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:17.329 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:17.329 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:17.329 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:17.329 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:17.329 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:17.329 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:17.329 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:17.329 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:17.329 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:17.329 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:17.588 btrfs-progs v6.8.1 00:11:17.588 See https://btrfs.readthedocs.io for more information. 00:11:17.588 00:11:17.588 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:17.588 NOTE: several default settings have changed in version 5.15, please make sure 00:11:17.588 this does not affect your deployments: 00:11:17.588 - DUP for metadata (-m dup) 00:11:17.588 - enabled no-holes (-O no-holes) 00:11:17.588 - enabled free-space-tree (-R free-space-tree) 00:11:17.588 00:11:17.588 Label: (null) 00:11:17.588 UUID: dd7045d4-7c1b-49ee-9e7f-94b78a74f1b7 00:11:17.588 Node size: 16384 00:11:17.588 Sector size: 4096 (CPU page size: 4096) 00:11:17.588 Filesystem size: 510.00MiB 00:11:17.588 Block group profiles: 00:11:17.588 Data: single 8.00MiB 00:11:17.588 Metadata: DUP 32.00MiB 00:11:17.588 System: DUP 8.00MiB 00:11:17.588 SSD detected: yes 00:11:17.588 Zoned device: no 00:11:17.588 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:17.588 Checksum: crc32c 00:11:17.588 Number of devices: 1 00:11:17.588 Devices: 00:11:17.588 ID SIZE PATH 00:11:17.588 1 510.00MiB /dev/nvme0n1p1 00:11:17.588 00:11:17.588 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:17.588 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 206420 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:17.847 00:11:17.847 real 0m0.582s 00:11:17.847 user 0m0.027s 00:11:17.847 sys 0m0.116s 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:17.847 ************************************ 00:11:17.847 END TEST filesystem_in_capsule_btrfs 00:11:17.847 ************************************ 00:11:17.847 22:20:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.847 ************************************ 00:11:17.847 START TEST filesystem_in_capsule_xfs 00:11:17.847 ************************************ 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:17.847 
22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:17.847 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:18.106 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:18.106 = sectsz=512 attr=2, projid32bit=1 00:11:18.106 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:18.106 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:18.106 data = bsize=4096 blocks=130560, imaxpct=25 00:11:18.106 = sunit=0 swidth=0 blks 00:11:18.106 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:18.106 log =internal log bsize=4096 blocks=16384, version=2 00:11:18.106 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:18.106 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:19.042 Discarding blocks...Done. 
00:11:19.042 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:19.042 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 206420 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:21.576 00:11:21.576 real 0m3.544s 00:11:21.576 user 0m0.022s 00:11:21.576 sys 0m0.078s 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:21.576 ************************************ 00:11:21.576 END TEST filesystem_in_capsule_xfs 00:11:21.576 ************************************ 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:21.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.576 22:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 206420 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 206420 ']' 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 206420 00:11:21.576 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:21.836 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.836 22:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 206420 00:11:21.836 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.836 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.836 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 206420' 00:11:21.836 killing process with pid 206420 00:11:21.836 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 206420 00:11:21.836 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 206420 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:22.096 00:11:22.096 real 0m16.996s 00:11:22.096 user 1m6.987s 00:11:22.096 sys 0m1.350s 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.096 ************************************ 00:11:22.096 END TEST nvmf_filesystem_in_capsule 00:11:22.096 ************************************ 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:22.096 rmmod nvme_tcp 00:11:22.096 rmmod nvme_fabrics 00:11:22.096 rmmod nvme_keyring 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.096 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:24.634 00:11:24.634 real 0m44.176s 00:11:24.634 user 2m21.879s 00:11:24.634 sys 0m7.546s 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:24.634 ************************************ 00:11:24.634 END TEST nvmf_filesystem 00:11:24.634 ************************************ 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:24.634 ************************************ 00:11:24.634 START TEST nvmf_target_discovery 00:11:24.634 ************************************ 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:24.634 * Looking for test storage... 
00:11:24.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:24.634 
22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:24.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.634 --rc genhtml_branch_coverage=1 00:11:24.634 --rc genhtml_function_coverage=1 00:11:24.634 --rc genhtml_legend=1 00:11:24.634 --rc geninfo_all_blocks=1 00:11:24.634 --rc geninfo_unexecuted_blocks=1 00:11:24.634 00:11:24.634 ' 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:24.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.634 --rc genhtml_branch_coverage=1 00:11:24.634 --rc genhtml_function_coverage=1 00:11:24.634 --rc genhtml_legend=1 00:11:24.634 --rc geninfo_all_blocks=1 00:11:24.634 --rc geninfo_unexecuted_blocks=1 00:11:24.634 00:11:24.634 ' 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:24.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.634 --rc genhtml_branch_coverage=1 00:11:24.634 --rc genhtml_function_coverage=1 00:11:24.634 --rc genhtml_legend=1 00:11:24.634 --rc geninfo_all_blocks=1 00:11:24.634 --rc geninfo_unexecuted_blocks=1 00:11:24.634 00:11:24.634 ' 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:24.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.634 --rc genhtml_branch_coverage=1 00:11:24.634 --rc genhtml_function_coverage=1 00:11:24.634 --rc genhtml_legend=1 00:11:24.634 --rc geninfo_all_blocks=1 00:11:24.634 --rc geninfo_unexecuted_blocks=1 00:11:24.634 00:11:24.634 ' 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.634 22:20:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.634 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:24.635 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.209 22:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.209 22:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:31.209 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:31.209 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.209 22:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:31.209 Found net devices under 0000:af:00.0: cvl_0_0 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:31.209 22:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:31.209 Found net devices under 0000:af:00.1: cvl_0_1 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.209 22:20:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.209 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:31.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:11:31.210 00:11:31.210 --- 10.0.0.2 ping statistics --- 00:11:31.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.210 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:31.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:11:31.210 00:11:31.210 --- 10.0.0.1 ping statistics --- 00:11:31.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.210 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=212798 00:11:31.210 22:20:51 
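The `nvmf_tcp_init` sequence traced above (common.sh@250–291) isolates the target-side port in a private network namespace, addresses both ends of the link, opens the NVMe/TCP port, and verifies reachability with a ping in each direction. A condensed replay of those steps, printed as a dry run since every command needs root (swap the `run` stub for `sudo "$@"` to execute for real):

```shell
#!/usr/bin/env bash
# Dry-run replay of the netns setup from nvmf_tcp_init; interface names
# and addresses match this run's trace.
run() { echo "+ $*"; }   # replace with: run() { sudo "$@"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                      # target-side port into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                                   # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator
```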
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 212798 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 212798 ']' 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.210 [2024-12-14 22:20:51.275563] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:31.210 [2024-12-14 22:20:51.275605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.210 [2024-12-14 22:20:51.354813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:31.210 [2024-12-14 22:20:51.378189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:31.210 [2024-12-14 22:20:51.378226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.210 [2024-12-14 22:20:51.378233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.210 [2024-12-14 22:20:51.378242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.210 [2024-12-14 22:20:51.378247] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:31.210 [2024-12-14 22:20:51.379768] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.210 [2024-12-14 22:20:51.379807] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.210 [2024-12-14 22:20:51.379955] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.210 [2024-12-14 22:20:51.379955] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.210 [2024-12-14 22:20:51.512771] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.210 Null1 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.210 
22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.210 [2024-12-14 22:20:51.582043] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.210 Null2 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.210 
22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.210 Null3 00:11:31.210 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.211 Null4 00:11:31.211 
22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:31.211 00:11:31.211 Discovery Log Number of Records 6, Generation counter 6 00:11:31.211 =====Discovery Log Entry 0====== 00:11:31.211 trtype: tcp 00:11:31.211 adrfam: ipv4 00:11:31.211 subtype: current discovery subsystem 00:11:31.211 treq: not required 00:11:31.211 portid: 0 00:11:31.211 trsvcid: 4420 00:11:31.211 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:31.211 traddr: 10.0.0.2 00:11:31.211 eflags: explicit discovery connections, duplicate discovery information 00:11:31.211 sectype: none 00:11:31.211 =====Discovery Log Entry 1====== 00:11:31.211 trtype: tcp 00:11:31.211 adrfam: ipv4 00:11:31.211 subtype: nvme subsystem 00:11:31.211 treq: not required 00:11:31.211 portid: 0 00:11:31.211 trsvcid: 4420 00:11:31.211 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:31.211 traddr: 10.0.0.2 00:11:31.211 eflags: none 00:11:31.211 sectype: none 00:11:31.211 =====Discovery Log Entry 2====== 00:11:31.211 
trtype: tcp 00:11:31.211 adrfam: ipv4 00:11:31.211 subtype: nvme subsystem 00:11:31.211 treq: not required 00:11:31.211 portid: 0 00:11:31.211 trsvcid: 4420 00:11:31.211 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:31.211 traddr: 10.0.0.2 00:11:31.211 eflags: none 00:11:31.211 sectype: none 00:11:31.211 =====Discovery Log Entry 3====== 00:11:31.211 trtype: tcp 00:11:31.211 adrfam: ipv4 00:11:31.211 subtype: nvme subsystem 00:11:31.211 treq: not required 00:11:31.211 portid: 0 00:11:31.211 trsvcid: 4420 00:11:31.211 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:31.211 traddr: 10.0.0.2 00:11:31.211 eflags: none 00:11:31.211 sectype: none 00:11:31.211 =====Discovery Log Entry 4====== 00:11:31.211 trtype: tcp 00:11:31.211 adrfam: ipv4 00:11:31.211 subtype: nvme subsystem 00:11:31.211 treq: not required 00:11:31.211 portid: 0 00:11:31.211 trsvcid: 4420 00:11:31.211 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:31.211 traddr: 10.0.0.2 00:11:31.211 eflags: none 00:11:31.211 sectype: none 00:11:31.211 =====Discovery Log Entry 5====== 00:11:31.211 trtype: tcp 00:11:31.211 adrfam: ipv4 00:11:31.211 subtype: discovery subsystem referral 00:11:31.211 treq: not required 00:11:31.211 portid: 0 00:11:31.211 trsvcid: 4430 00:11:31.211 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:31.211 traddr: 10.0.0.2 00:11:31.211 eflags: none 00:11:31.211 sectype: none 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:31.211 Perform nvmf subsystem discovery via RPC 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.211 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.211 [ 00:11:31.211 { 00:11:31.211 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:31.211 "subtype": "Discovery", 00:11:31.211 "listen_addresses": [ 00:11:31.211 { 00:11:31.211 "trtype": "TCP", 00:11:31.211 "adrfam": "IPv4", 00:11:31.211 "traddr": "10.0.0.2", 00:11:31.211 "trsvcid": "4420" 00:11:31.211 } 00:11:31.211 ], 00:11:31.211 "allow_any_host": true, 00:11:31.211 "hosts": [] 00:11:31.211 }, 00:11:31.211 { 00:11:31.211 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:31.211 "subtype": "NVMe", 00:11:31.211 "listen_addresses": [ 00:11:31.211 { 00:11:31.211 "trtype": "TCP", 00:11:31.211 "adrfam": "IPv4", 00:11:31.211 "traddr": "10.0.0.2", 00:11:31.211 "trsvcid": "4420" 00:11:31.211 } 00:11:31.211 ], 00:11:31.211 "allow_any_host": true, 00:11:31.211 "hosts": [], 00:11:31.211 "serial_number": "SPDK00000000000001", 00:11:31.211 "model_number": "SPDK bdev Controller", 00:11:31.211 "max_namespaces": 32, 00:11:31.211 "min_cntlid": 1, 00:11:31.211 "max_cntlid": 65519, 00:11:31.211 "namespaces": [ 00:11:31.211 { 00:11:31.211 "nsid": 1, 00:11:31.211 "bdev_name": "Null1", 00:11:31.211 "name": "Null1", 00:11:31.211 "nguid": "1E0441A7BC2B423DB399D851A884CC30", 00:11:31.211 "uuid": "1e0441a7-bc2b-423d-b399-d851a884cc30" 00:11:31.211 } 00:11:31.211 ] 00:11:31.211 }, 00:11:31.211 { 00:11:31.211 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:31.211 "subtype": "NVMe", 00:11:31.211 "listen_addresses": [ 00:11:31.211 { 00:11:31.211 "trtype": "TCP", 00:11:31.211 "adrfam": "IPv4", 00:11:31.211 "traddr": "10.0.0.2", 00:11:31.211 "trsvcid": "4420" 00:11:31.211 } 00:11:31.211 ], 00:11:31.211 "allow_any_host": true, 00:11:31.211 "hosts": [], 00:11:31.211 "serial_number": "SPDK00000000000002", 00:11:31.211 "model_number": "SPDK bdev Controller", 00:11:31.211 "max_namespaces": 32, 00:11:31.211 "min_cntlid": 1, 00:11:31.211 "max_cntlid": 65519, 00:11:31.211 "namespaces": [ 00:11:31.211 { 00:11:31.211 "nsid": 1, 00:11:31.211 "bdev_name": "Null2", 00:11:31.211 "name": "Null2", 00:11:31.211 "nguid": "1D5B47389F034F22A9F332173DA8BD8C", 
00:11:31.211 "uuid": "1d5b4738-9f03-4f22-a9f3-32173da8bd8c" 00:11:31.211 } 00:11:31.211 ] 00:11:31.211 }, 00:11:31.211 { 00:11:31.211 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:31.211 "subtype": "NVMe", 00:11:31.211 "listen_addresses": [ 00:11:31.211 { 00:11:31.211 "trtype": "TCP", 00:11:31.211 "adrfam": "IPv4", 00:11:31.211 "traddr": "10.0.0.2", 00:11:31.211 "trsvcid": "4420" 00:11:31.211 } 00:11:31.211 ], 00:11:31.211 "allow_any_host": true, 00:11:31.211 "hosts": [], 00:11:31.211 "serial_number": "SPDK00000000000003", 00:11:31.211 "model_number": "SPDK bdev Controller", 00:11:31.211 "max_namespaces": 32, 00:11:31.211 "min_cntlid": 1, 00:11:31.211 "max_cntlid": 65519, 00:11:31.211 "namespaces": [ 00:11:31.211 { 00:11:31.211 "nsid": 1, 00:11:31.211 "bdev_name": "Null3", 00:11:31.211 "name": "Null3", 00:11:31.211 "nguid": "563A78DFE5D5446BA6D0C0B7F69B026F", 00:11:31.211 "uuid": "563a78df-e5d5-446b-a6d0-c0b7f69b026f" 00:11:31.211 } 00:11:31.211 ] 00:11:31.212 }, 00:11:31.212 { 00:11:31.212 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:31.212 "subtype": "NVMe", 00:11:31.212 "listen_addresses": [ 00:11:31.212 { 00:11:31.212 "trtype": "TCP", 00:11:31.212 "adrfam": "IPv4", 00:11:31.212 "traddr": "10.0.0.2", 00:11:31.212 "trsvcid": "4420" 00:11:31.212 } 00:11:31.212 ], 00:11:31.212 "allow_any_host": true, 00:11:31.212 "hosts": [], 00:11:31.212 "serial_number": "SPDK00000000000004", 00:11:31.212 "model_number": "SPDK bdev Controller", 00:11:31.212 "max_namespaces": 32, 00:11:31.212 "min_cntlid": 1, 00:11:31.212 "max_cntlid": 65519, 00:11:31.212 "namespaces": [ 00:11:31.212 { 00:11:31.212 "nsid": 1, 00:11:31.212 "bdev_name": "Null4", 00:11:31.212 "name": "Null4", 00:11:31.212 "nguid": "C6217ABE10284D83ABF14E122C487A36", 00:11:31.212 "uuid": "c6217abe-1028-4d83-abf1-4e122c487a36" 00:11:31.212 } 00:11:31.212 ] 00:11:31.212 } 00:11:31.212 ] 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.212 
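The four subsystems returned by `nvmf_get_subsystems` above were built by the loop in target/discovery.sh@26–32: for each index, create a null bdev, a subsystem, attach the bdev as a namespace, and add a TCP listener, then expose the discovery service. A sketch of that loop spelled out with SPDK's `rpc.py` (the `scripts/rpc.py` path is assumed relative to an SPDK checkout); printed rather than executed, since the RPCs need a live `nvmf_tgt`:

```shell
#!/usr/bin/env bash
# Dry-run of the per-subsystem construction loop; serial numbers, sizes,
# and the 10.0.0.2:4420 listener match this run's trace.
rpc() { echo "+ scripts/rpc.py $*"; }   # replace with: rpc() { scripts/rpc.py "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
  rpc bdev_null_create "Null$i" 102400 512                       # 100 MiB, 512 B blocks
  rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
  rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```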
22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.212 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.212 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:31.212 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:31.212 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:31.212 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:31.212 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:31.212 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:31.212 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:31.212 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:31.212 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:31.212 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:31.212 rmmod nvme_tcp 00:11:31.212 rmmod nvme_fabrics 00:11:31.212 rmmod nvme_keyring 00:11:31.212 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 212798 ']' 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 212798 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 212798 ']' 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 212798 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:31.475 
22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 212798 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 212798' 00:11:31.475 killing process with pid 212798 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 212798 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 212798 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.475 22:20:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:34.013 00:11:34.013 real 0m9.288s 00:11:34.013 user 0m5.610s 00:11:34.013 sys 0m4.759s 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.013 ************************************ 00:11:34.013 END TEST nvmf_target_discovery 00:11:34.013 ************************************ 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:34.013 ************************************ 00:11:34.013 START TEST nvmf_referrals 00:11:34.013 ************************************ 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:34.013 * Looking for test storage... 
00:11:34.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:34.013 22:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:34.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.013 
--rc genhtml_branch_coverage=1 00:11:34.013 --rc genhtml_function_coverage=1 00:11:34.013 --rc genhtml_legend=1 00:11:34.013 --rc geninfo_all_blocks=1 00:11:34.013 --rc geninfo_unexecuted_blocks=1 00:11:34.013 00:11:34.013 ' 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:34.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.013 --rc genhtml_branch_coverage=1 00:11:34.013 --rc genhtml_function_coverage=1 00:11:34.013 --rc genhtml_legend=1 00:11:34.013 --rc geninfo_all_blocks=1 00:11:34.013 --rc geninfo_unexecuted_blocks=1 00:11:34.013 00:11:34.013 ' 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:34.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.013 --rc genhtml_branch_coverage=1 00:11:34.013 --rc genhtml_function_coverage=1 00:11:34.013 --rc genhtml_legend=1 00:11:34.013 --rc geninfo_all_blocks=1 00:11:34.013 --rc geninfo_unexecuted_blocks=1 00:11:34.013 00:11:34.013 ' 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:34.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.013 --rc genhtml_branch_coverage=1 00:11:34.013 --rc genhtml_function_coverage=1 00:11:34.013 --rc genhtml_legend=1 00:11:34.013 --rc geninfo_all_blocks=1 00:11:34.013 --rc geninfo_unexecuted_blocks=1 00:11:34.013 00:11:34.013 ' 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.013 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.014 
22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.014 22:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:34.014 22:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:34.014 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.586 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:40.586 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:40.586 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:40.586 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:40.586 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:40.586 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:40.586 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:40.586 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:40.586 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:40.586 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:40.586 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:40.587 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:40.587 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:40.587 Found net devices under 0000:af:00.0: cvl_0_0 00:11:40.587 22:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:40.587 Found net devices under 0000:af:00.1: cvl_0_1 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:40.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:11:40.587 00:11:40.587 --- 10.0.0.2 ping statistics --- 00:11:40.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.587 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:40.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:40.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:11:40.587 00:11:40.587 --- 10.0.0.1 ping statistics --- 00:11:40.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.587 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=216550 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 216550 00:11:40.587 
22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 216550 ']' 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.587 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.587 [2024-12-14 22:21:00.599027] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:40.588 [2024-12-14 22:21:00.599076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.588 [2024-12-14 22:21:00.676133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.588 [2024-12-14 22:21:00.699191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.588 [2024-12-14 22:21:00.699228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:40.588 [2024-12-14 22:21:00.699235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.588 [2024-12-14 22:21:00.699241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.588 [2024-12-14 22:21:00.699246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:40.588 [2024-12-14 22:21:00.700702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.588 [2024-12-14 22:21:00.700740] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.588 [2024-12-14 22:21:00.700846] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.588 [2024-12-14 22:21:00.700848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.588 [2024-12-14 22:21:00.841503] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.588 [2024-12-14 22:21:00.866073] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:40.588 22:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:40.588 22:21:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.588 22:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:40.588 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.848 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.848 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:40.848 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:40.848 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:40.848 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:40.848 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:40.848 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.848 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:40.848 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:40.848 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:40.848 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:40.848 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:40.848 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:40.848 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:41.107 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:41.107 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:41.107 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:41.107 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:41.107 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:41.107 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:41.107 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:41.107 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:41.366 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:41.625 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:41.625 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:41.625 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:41.625 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:41.625 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:41.625 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:41.625 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:41.625 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:41.884 22:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:41.884 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:42.143 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:42.143 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:42.143 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:42.143 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:42.143 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.143 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:42.143 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.143 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:42.143 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.143 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.143 rmmod nvme_tcp 00:11:42.143 rmmod nvme_fabrics 00:11:42.143 rmmod nvme_keyring 00:11:42.143 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.143 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:42.143 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:42.143 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 216550 ']' 00:11:42.143 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 216550 00:11:42.143 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 216550 ']' 00:11:42.143 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 216550 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216550 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216550' 00:11:42.402 killing process with pid 216550 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 216550 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 216550 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.402 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:44.940 00:11:44.940 real 0m10.850s 00:11:44.940 user 0m12.595s 00:11:44.940 sys 0m5.169s 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.940 ************************************ 
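The teardown traced in the nvmf/common.sh@125-@131 and @517-@524 entries above reduces to a short sequence: stop the target process, unload the initiator-side kernel modules, and restore iptables without the SPDK-tagged rules. A hedged sketch (function names are mine, not the harness's; the grep filter is the pure, testable part):

```shell
# Remove only SPDK-tagged rules from an iptables-save dump; everything
# else in the ruleset is preserved on restore.
filter_spdk_rules() {
    grep -v SPDK_NVMF
}

# Condensed teardown mirroring the log above. Requires root; "$1" stands
# in for the nvmfpid seen in the trace (216550 in this run).
nvmf_teardown() {
    kill -9 "$1" 2>/dev/null || true
    modprobe -v -r nvme-tcp || true      # also drops nvme_fabrics/nvme_keyring deps
    modprobe -v -r nvme-fabrics || true
    iptables-save | filter_spdk_rules | iptables-restore
}
```

The filter explains why the harness tags its rules: teardown can strip exactly its own firewall changes without touching unrelated rules.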
00:11:44.940 END TEST nvmf_referrals 00:11:44.940 ************************************ 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:44.940 ************************************ 00:11:44.940 START TEST nvmf_connect_disconnect 00:11:44.940 ************************************ 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:44.940 * Looking for test storage... 
00:11:44.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.940 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:44.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.941 --rc genhtml_branch_coverage=1 00:11:44.941 --rc genhtml_function_coverage=1 00:11:44.941 --rc genhtml_legend=1 00:11:44.941 --rc geninfo_all_blocks=1 00:11:44.941 --rc geninfo_unexecuted_blocks=1 00:11:44.941 00:11:44.941 ' 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:44.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.941 --rc genhtml_branch_coverage=1 00:11:44.941 --rc genhtml_function_coverage=1 00:11:44.941 --rc genhtml_legend=1 00:11:44.941 --rc geninfo_all_blocks=1 00:11:44.941 --rc geninfo_unexecuted_blocks=1 00:11:44.941 00:11:44.941 ' 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:44.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.941 --rc genhtml_branch_coverage=1 00:11:44.941 --rc genhtml_function_coverage=1 00:11:44.941 --rc genhtml_legend=1 00:11:44.941 --rc geninfo_all_blocks=1 00:11:44.941 --rc geninfo_unexecuted_blocks=1 00:11:44.941 00:11:44.941 ' 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:44.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.941 --rc genhtml_branch_coverage=1 00:11:44.941 --rc genhtml_function_coverage=1 00:11:44.941 --rc genhtml_legend=1 00:11:44.941 --rc geninfo_all_blocks=1 00:11:44.941 --rc geninfo_unexecuted_blocks=1 00:11:44.941 00:11:44.941 ' 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
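The scripts/common.sh trace above (`cmp_versions`, `decimal`, the `ver1`/`ver2` arrays) implements a numeric, component-wise version comparison: both strings are split on `.`, `-` and `:`, then compared field by field (so `1.2.9 < 1.10`, unlike a lexicographic compare). A self-contained sketch of the same idea (helper name is illustrative, not from SPDK):

```shell
# Component-wise "less than" for dotted versions, mirroring the
# cmp_versions/decimal trace above. Splits on '.', '-' and ':'.
version_lt() {
    local IFS='.-:' a b n i x y
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    n=${#a[@]}
    (( ${#b[@]} > n )) && n=${#b[@]}
    for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0}; y=${b[i]:-0}      # missing components count as 0
        (( x < y )) && return 0         # strictly smaller component: lt
        (( x > y )) && return 1         # strictly larger: not lt
    done
    return 1                            # equal: not lt
}
```

This is why the `lt 1.15 2` check above returns 0: the first components already decide the comparison.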
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:44.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
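The very long PATH echoed above grows because paths/export.sh prepends its directories unconditionally each time it is re-sourced. The harness tolerates the duplication; purely as an illustration (not part of SPDK), such a PATH could be collapsed with first-occurrence-wins deduplication:

```shell
# Collapse repeated PATH entries, first occurrence wins. Illustrative
# only; assumes entries contain no glob characters (no 'set -f' guard).
dedup_path() {
    local IFS=: out= dir
    for dir in $1; do
        case ":$out:" in
            *":$dir:"*) ;;                  # already seen, drop duplicate
            *) out=${out:+$out:}$dir ;;
        esac
    done
    printf '%s\n' "$out"
}
```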
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:44.941 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.942 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:44.942 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:44.942 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:44.942 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.942 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.942 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.942 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:44.942 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:44.942 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:44.942 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.515 22:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:51.515 22:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:51.515 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:51.515 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.515 22:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:51.515 Found net devices under 0000:af:00.0: cvl_0_0 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.515 22:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:51.515 Found net devices under 0000:af:00.1: cvl_0_1 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:51.515 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.516 22:21:11 
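The nvmf_tcp_init steps traced above (nvmf/common.sh@265-@284) move the target-side NIC (cvl_0_0) into a private network namespace so that target (10.0.0.2) and initiator (10.0.0.1) exchange traffic over the real link even though both run on one host. A dry-run sketch that only prints the same command sequence, since the live commands need root and the physical cvl_* devices:

```shell
# Render (don't run) the namespace plumbing from the log above.
# Addresses match the trace: initiator gets .1, the namespaced target .2.
netns_setup_cmds() {
    local ns=$1 tgt_if=$2 ini_if=$3
    cat <<EOF
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
EOF
}
```

Called as `netns_setup_cmds cvl_0_0_ns_spdk cvl_0_0 cvl_0_1`, it reproduces the sequence executed in this run before the ping sanity checks.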
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:51.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:11:51.516 00:11:51.516 --- 10.0.0.2 ping statistics --- 00:11:51.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.516 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:51.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:11:51.516 00:11:51.516 --- 10.0.0.1 ping statistics --- 00:11:51.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.516 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
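The firewall rule installed before the pings above comes from the `ipts` wrapper (nvmf/common.sh@790), which appends an SPDK_NVMF comment to every rule it installs; that tag is what the teardown's `iptables-save | grep -v SPDK_NVMF | iptables-restore` keys on. A print-only rendition of the wrapper (it renders the command instead of invoking iptables, which needs root):

```shell
# Build the tagged iptables command line as the ipts wrapper would.
# The rule text itself is duplicated into the SPDK_NVMF comment.
ipts_cmd() {
    printf "iptables %s -m comment --comment 'SPDK_NVMF:%s'\n" "$*" "$*"
}
```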
nvmfpid=221025 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 221025 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 221025 ']' 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.516 [2024-12-14 22:21:11.503410] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:51.516 [2024-12-14 22:21:11.503457] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.516 [2024-12-14 22:21:11.577643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.516 [2024-12-14 22:21:11.601128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:51.516 [2024-12-14 22:21:11.601167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.516 [2024-12-14 22:21:11.601185] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.516 [2024-12-14 22:21:11.601191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.516 [2024-12-14 22:21:11.601212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.516 [2024-12-14 22:21:11.602673] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.516 [2024-12-14 22:21:11.602710] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.516 [2024-12-14 22:21:11.602814] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.516 [2024-12-14 22:21:11.602816] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:51.516 22:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.516 [2024-12-14 22:21:11.743013] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.516 22:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.516 [2024-12-14 22:21:11.806989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:51.516 22:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:53.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.877 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.597 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.708 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.474 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.326 [2024-12-14 22:24:53.909809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ccd90 is same with the state(6) to be set 00:15:33.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.200 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:42.200 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:42.200 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:42.200 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:15:42.200 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:42.459 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:15:42.459 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:42.459 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:42.459 rmmod 
nvme_tcp 00:15:42.459 rmmod nvme_fabrics 00:15:42.459 rmmod nvme_keyring 00:15:42.459 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:42.459 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:15:42.459 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:15:42.459 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 221025 ']' 00:15:42.459 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 221025 00:15:42.459 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 221025 ']' 00:15:42.459 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 221025 00:15:42.459 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:15:42.459 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.459 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 221025 00:15:42.459 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.459 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.459 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 221025' 00:15:42.459 killing process with pid 221025 00:15:42.460 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 221025 00:15:42.460 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- 
# wait 221025 00:15:42.719 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:42.719 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:42.719 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:42.719 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:15:42.719 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:15:42.719 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:42.719 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:15:42.719 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:42.719 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:42.719 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.719 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.719 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.626 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:44.626 00:15:44.626 real 4m0.068s 00:15:44.626 user 15m17.639s 00:15:44.626 sys 0m24.487s 00:15:44.626 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.626 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:44.626 
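The log above traces the connect_disconnect test's full pattern: one-time target setup over RPC (TCP transport, a 64 MiB malloc bdev, a subsystem with that namespace, a listener on 10.0.0.2:4420), then 100 iterations of host connect followed by disconnect. A minimal sketch of that structure is below; the `run` wrapper only echoes commands, since the real script (test/nvmf/target/connect_disconnect.sh) needs a running `nvmf_tgt`, `rpc.py`, and `nvme-cli`, none of which are assumed here.

```shell
#!/usr/bin/env bash
# Sketch of the setup + loop seen in the log above. Commands are echoed,
# not executed: rpc.py / nvme here stand in for the real SPDK and nvme-cli
# binaries, which this sketch does not assume are installed.
run() { echo "+ $*"; }

# One-time target setup (mirrors the RPCs in the log)
run rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
run rpc.py bdev_malloc_create 64 512   # creates Malloc0
run rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
run rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
run rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# The loop: each iteration connects (8 I/O queues, per NVME_CONNECT='nvme
# connect -i 8' in the log) and then disconnects, producing one
# "disconnected 1 controller(s)" line per pass.
num_iterations=100
for ((i = 1; i <= num_iterations; i++)); do
    run nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    run nvme disconnect -n nqn.2016-06.io.spdk:cnode1
done
```

Teardown in the log is the reverse: kill the target PID, remove the `nvme-tcp`/`nvme-fabrics` modules, and strip the `SPDK_NVMF`-tagged iptables rule via `iptables-save | grep -v SPDK_NVMF | iptables-restore`.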
************************************ 00:15:44.626 END TEST nvmf_connect_disconnect 00:15:44.626 ************************************ 00:15:44.626 22:25:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:44.626 22:25:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:44.626 22:25:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.626 22:25:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:44.886 ************************************ 00:15:44.886 START TEST nvmf_multitarget 00:15:44.886 ************************************ 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:44.886 * Looking for test storage... 
00:15:44.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:44.886 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.886 --rc genhtml_branch_coverage=1 00:15:44.886 --rc genhtml_function_coverage=1 00:15:44.886 --rc genhtml_legend=1 00:15:44.886 --rc geninfo_all_blocks=1 00:15:44.886 --rc geninfo_unexecuted_blocks=1 00:15:44.886 00:15:44.886 ' 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:44.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.886 --rc genhtml_branch_coverage=1 00:15:44.886 --rc genhtml_function_coverage=1 00:15:44.886 --rc genhtml_legend=1 00:15:44.886 --rc geninfo_all_blocks=1 00:15:44.886 --rc geninfo_unexecuted_blocks=1 00:15:44.886 00:15:44.886 ' 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:44.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.886 --rc genhtml_branch_coverage=1 00:15:44.886 --rc genhtml_function_coverage=1 00:15:44.886 --rc genhtml_legend=1 00:15:44.886 --rc geninfo_all_blocks=1 00:15:44.886 --rc geninfo_unexecuted_blocks=1 00:15:44.886 00:15:44.886 ' 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:44.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.886 --rc genhtml_branch_coverage=1 00:15:44.886 --rc genhtml_function_coverage=1 00:15:44.886 --rc genhtml_legend=1 00:15:44.886 --rc geninfo_all_blocks=1 00:15:44.886 --rc geninfo_unexecuted_blocks=1 00:15:44.886 00:15:44.886 ' 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.886 22:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.886 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:44.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.887 22:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:15:44.887 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:51.457 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:51.457 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:15:51.457 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:51.457 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:15:51.458 22:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:51.458 22:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:51.458 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:51.458 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.458 22:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:51.458 Found net devices under 0000:af:00.0: cvl_0_0 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.458 
22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:51.458 Found net devices under 0000:af:00.1: cvl_0_1 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:51.458 22:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:51.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:15:51.458 00:15:51.458 --- 10.0.0.2 ping statistics --- 00:15:51.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.458 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:51.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:51.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:15:51.458 00:15:51.458 --- 10.0.0.1 ping statistics --- 00:15:51.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.458 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:15:51.458 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=263873 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 263873 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 263873 ']' 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:51.459 [2024-12-14 22:25:11.660840] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:51.459 [2024-12-14 22:25:11.660883] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.459 [2024-12-14 22:25:11.735925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:51.459 [2024-12-14 22:25:11.759642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.459 [2024-12-14 22:25:11.759678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:51.459 [2024-12-14 22:25:11.759685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.459 [2024-12-14 22:25:11.759691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.459 [2024-12-14 22:25:11.759697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.459 [2024-12-14 22:25:11.761004] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.459 [2024-12-14 22:25:11.761043] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.459 [2024-12-14 22:25:11.761151] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.459 [2024-12-14 22:25:11.761152] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:51.459 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:51.459 22:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:51.459 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:51.459 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:51.459 "nvmf_tgt_1" 00:15:51.459 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:51.459 "nvmf_tgt_2" 00:15:51.459 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:51.459 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:51.718 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:51.718 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:51.718 true 00:15:51.718 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:51.718 true 00:15:51.718 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:51.718 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:51.977 rmmod nvme_tcp 00:15:51.977 rmmod nvme_fabrics 00:15:51.977 rmmod nvme_keyring 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 263873 ']' 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 263873 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 263873 ']' 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 263873 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 263873 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 263873' 00:15:51.977 killing process with pid 263873 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 263873 00:15:51.977 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 263873 00:15:52.237 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:52.237 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:52.237 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:52.237 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:15:52.237 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:52.237 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:15:52.237 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:15:52.237 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:52.237 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:52.237 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.237 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.237 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.183 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:54.183 00:15:54.183 real 0m9.488s 00:15:54.183 user 0m7.272s 00:15:54.183 sys 0m4.766s 00:15:54.183 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.183 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:54.183 ************************************ 00:15:54.183 END TEST nvmf_multitarget 00:15:54.183 ************************************ 00:15:54.183 22:25:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:54.183 22:25:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.183 22:25:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.183 22:25:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.443 ************************************ 00:15:54.443 START TEST nvmf_rpc 00:15:54.443 ************************************ 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:54.443 * Looking for test storage... 
00:15:54.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.443 22:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:54.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.443 --rc genhtml_branch_coverage=1 00:15:54.443 --rc genhtml_function_coverage=1 00:15:54.443 --rc genhtml_legend=1 00:15:54.443 --rc geninfo_all_blocks=1 00:15:54.443 --rc geninfo_unexecuted_blocks=1 
00:15:54.443 00:15:54.443 ' 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:54.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.443 --rc genhtml_branch_coverage=1 00:15:54.443 --rc genhtml_function_coverage=1 00:15:54.443 --rc genhtml_legend=1 00:15:54.443 --rc geninfo_all_blocks=1 00:15:54.443 --rc geninfo_unexecuted_blocks=1 00:15:54.443 00:15:54.443 ' 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:54.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.443 --rc genhtml_branch_coverage=1 00:15:54.443 --rc genhtml_function_coverage=1 00:15:54.443 --rc genhtml_legend=1 00:15:54.443 --rc geninfo_all_blocks=1 00:15:54.443 --rc geninfo_unexecuted_blocks=1 00:15:54.443 00:15:54.443 ' 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:54.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.443 --rc genhtml_branch_coverage=1 00:15:54.443 --rc genhtml_function_coverage=1 00:15:54.443 --rc genhtml_legend=1 00:15:54.443 --rc geninfo_all_blocks=1 00:15:54.443 --rc geninfo_unexecuted_blocks=1 00:15:54.443 00:15:54.443 ' 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.443 22:25:15 
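The trace above steps through `cmp_versions` from `scripts/common.sh`, which splits `1.15` and `2` on `.-:` into arrays and compares them field by field to decide whether the installed lcov is older than 2 (and thus needs the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options). A minimal stand-alone sketch of the same check, using GNU `sort -V` instead of the field-by-field loop (the function name `version_lt` is mine, not SPDK's):

```shell
# Sketch of the "lt 1.15 2" check the trace performs. GNU sort -V orders
# dotted version strings, so $1 is strictly older than $2 exactly when the
# two differ and $1 sorts first.
version_lt() {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

if version_lt "1.15" "2"; then
    echo "lcov 1.15 is older than 2: enable branch/function coverage flags"
fi
```

The array-based loop in `scripts/common.sh` avoids depending on `sort -V`, which is a GNU extension; the behavior for plain dotted versions is the same.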
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
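The `paths/export.sh` lines above prepend the Go, protoc, and golangci directories on every `source`, so the PATH echoed by the trace contains the same three entries many times over. That is harmless for command lookup (the first hit wins), but a dedup pass keeps the variable readable; a small sketch (the helper name `dedup_path` and the sample PATH are illustrative, not from the log):

```shell
# Collapse repeated PATH entries while preserving first-seen order.
# awk splits on ':' via RS and prints each directory only once.
dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

PATH_WITH_DUPES="/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin"
dedup_path "$PATH_WITH_DUPES"
```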
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:54.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:54.443 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:54.444 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:54.444 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.444 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.444 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.444 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:54.444 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:54.444 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:15:54.444 22:25:15 
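The `line 33: [: : integer expression expected` message in the trace is a real (if benign) shell bug: `common.sh` runs `'[' '' -eq 1 ']'` because some test flag expanded to an empty string, and `[` cannot compare an empty string as an integer. The script continues because the test simply evaluates false. A sketch of the failure and the usual defensive fix (`FLAG` is illustrative; the trace does not show which variable was empty):

```shell
FLAG=""

# Reproduces the logged error: [ '' -eq 1 ] is not a valid integer
# comparison, so [ complains "integer expression expected" and fails.
[ "$FLAG" -eq 1 ] 2>/dev/null || echo "empty string breaks -eq"

# Defaulting the expansion keeps the test well-formed: ${FLAG:-0}
# substitutes 0 when FLAG is unset *or* empty.
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled or unset"
fi
```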
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.016 
22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:16:01.016 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:01.016 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:01.016 Found net devices under 0000:af:00.0: cvl_0_0 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:01.016 Found net devices under 0000:af:00.1: cvl_0_1 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.016 22:25:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:01.016 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:01.017 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:01.017 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.017 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.017 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:01.017 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:01.017 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:01.017 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:01.017 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:01.017 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:01.017 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:01.017 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.017 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:01.017 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:01.017 
22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:01.017 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:01.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:16:01.017 00:16:01.017 --- 10.0.0.2 ping statistics --- 00:16:01.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.017 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:01.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:01.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:16:01.017 00:16:01.017 --- 10.0.0.1 ping statistics --- 00:16:01.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.017 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=267593 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:01.017 
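The `nvmf_tcp_init` section above moves the target NIC into its own network namespace so initiator and target traffic cross a real link even on one host: `cvl_0_0` (target side) goes into `cvl_0_0_ns_spdk` with 10.0.0.2/24, `cvl_0_1` (initiator side) stays in the root namespace with 10.0.0.1/24, iptables opens port 4420, and a ping in each direction confirms connectivity before `nvmf_tgt` starts. A sketch of that wiring collected into one function (root required to actually run it; device and namespace names are taken from the log output):

```shell
# Namespace wiring as performed in the trace; defined but not invoked,
# since every command here needs CAP_NET_ADMIN.
setup_tcp_netns() {
    ip netns add cvl_0_0_ns_spdk                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target NIC in
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port toward the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify both directions before launching nvmf_tgt in the namespace
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
}
```

This is why the target app is later launched as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`: it must listen on 10.0.0.2 inside the namespace.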
22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 267593 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 267593 ']' 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.017 [2024-12-14 22:25:21.267879] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:16:01.017 [2024-12-14 22:25:21.267927] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.017 [2024-12-14 22:25:21.344708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:01.017 [2024-12-14 22:25:21.367712] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.017 [2024-12-14 22:25:21.367747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.017 [2024-12-14 22:25:21.367754] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.017 [2024-12-14 22:25:21.367760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:01.017 [2024-12-14 22:25:21.367769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:01.017 [2024-12-14 22:25:21.369055] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.017 [2024-12-14 22:25:21.369093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.017 [2024-12-14 22:25:21.369199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.017 [2024-12-14 22:25:21.369200] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:01.017 "tick_rate": 2100000000, 00:16:01.017 "poll_groups": [ 00:16:01.017 { 00:16:01.017 "name": "nvmf_tgt_poll_group_000", 00:16:01.017 "admin_qpairs": 0, 00:16:01.017 "io_qpairs": 0, 00:16:01.017 
"current_admin_qpairs": 0, 00:16:01.017 "current_io_qpairs": 0, 00:16:01.017 "pending_bdev_io": 0, 00:16:01.017 "completed_nvme_io": 0, 00:16:01.017 "transports": [] 00:16:01.017 }, 00:16:01.017 { 00:16:01.017 "name": "nvmf_tgt_poll_group_001", 00:16:01.017 "admin_qpairs": 0, 00:16:01.017 "io_qpairs": 0, 00:16:01.017 "current_admin_qpairs": 0, 00:16:01.017 "current_io_qpairs": 0, 00:16:01.017 "pending_bdev_io": 0, 00:16:01.017 "completed_nvme_io": 0, 00:16:01.017 "transports": [] 00:16:01.017 }, 00:16:01.017 { 00:16:01.017 "name": "nvmf_tgt_poll_group_002", 00:16:01.017 "admin_qpairs": 0, 00:16:01.017 "io_qpairs": 0, 00:16:01.017 "current_admin_qpairs": 0, 00:16:01.017 "current_io_qpairs": 0, 00:16:01.017 "pending_bdev_io": 0, 00:16:01.017 "completed_nvme_io": 0, 00:16:01.017 "transports": [] 00:16:01.017 }, 00:16:01.017 { 00:16:01.017 "name": "nvmf_tgt_poll_group_003", 00:16:01.017 "admin_qpairs": 0, 00:16:01.017 "io_qpairs": 0, 00:16:01.017 "current_admin_qpairs": 0, 00:16:01.017 "current_io_qpairs": 0, 00:16:01.017 "pending_bdev_io": 0, 00:16:01.017 "completed_nvme_io": 0, 00:16:01.017 "transports": [] 00:16:01.017 } 00:16:01.017 ] 00:16:01.017 }' 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:01.017 [2024-12-14 22:25:21.609352] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:01.017 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:16:01.018 "tick_rate": 2100000000,
00:16:01.018 "poll_groups": [
00:16:01.018 {
00:16:01.018 "name": "nvmf_tgt_poll_group_000",
00:16:01.018 "admin_qpairs": 0,
00:16:01.018 "io_qpairs": 0,
00:16:01.018 "current_admin_qpairs": 0,
00:16:01.018 "current_io_qpairs": 0,
00:16:01.018 "pending_bdev_io": 0,
00:16:01.018 "completed_nvme_io": 0,
00:16:01.018 "transports": [
00:16:01.018 {
00:16:01.018 "trtype": "TCP"
00:16:01.018 }
00:16:01.018 ]
00:16:01.018 },
00:16:01.018 {
00:16:01.018 "name": "nvmf_tgt_poll_group_001",
00:16:01.018 "admin_qpairs": 0,
00:16:01.018 "io_qpairs": 0,
00:16:01.018 "current_admin_qpairs": 0,
00:16:01.018 "current_io_qpairs": 0,
00:16:01.018 "pending_bdev_io": 0,
00:16:01.018 "completed_nvme_io": 0,
00:16:01.018 "transports": [
00:16:01.018 {
00:16:01.018 "trtype": "TCP"
00:16:01.018 }
00:16:01.018 ]
00:16:01.018 },
00:16:01.018 {
00:16:01.018 "name": "nvmf_tgt_poll_group_002",
00:16:01.018 "admin_qpairs": 0,
00:16:01.018 "io_qpairs": 0,
00:16:01.018 "current_admin_qpairs": 0,
00:16:01.018 "current_io_qpairs": 0,
00:16:01.018 "pending_bdev_io": 0,
00:16:01.018 "completed_nvme_io": 0,
00:16:01.018 "transports": [
00:16:01.018 {
00:16:01.018 "trtype": "TCP"
00:16:01.018 }
00:16:01.018 ]
00:16:01.018 },
00:16:01.018 {
00:16:01.018 "name": "nvmf_tgt_poll_group_003",
00:16:01.018 "admin_qpairs": 0,
00:16:01.018 "io_qpairs": 0,
00:16:01.018 "current_admin_qpairs": 0,
00:16:01.018 "current_io_qpairs": 0,
00:16:01.018 "pending_bdev_io": 0,
00:16:01.018 "completed_nvme_io": 0,
00:16:01.018 "transports": [
00:16:01.018 {
00:16:01.018 "trtype": "TCP"
00:16:01.018 }
00:16:01.018 ]
00:16:01.018 }
00:16:01.018 ]
00:16:01.018 }'
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:01.018 Malloc1
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:01.018 [2024-12-14 22:25:21.795060] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420
00:16:01.018 [2024-12-14 22:25:21.823596] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562'
00:16:01.018 Failed to write to /dev/nvme-fabrics: Input/output error
00:16:01.018 could not add new controller: failed to write to nvme-fabrics device
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.018 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:02.397 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:16:02.397 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:02.397 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:02.397 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:02.397 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:04.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:16:04.308 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:04.573 [2024-12-14 22:25:25.189336] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562'
00:16:04.573 Failed to write to /dev/nvme-fabrics: Input/output error
00:16:04.573 could not add new controller: failed to write to nvme-fabrics device
00:16:04.573 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:16:04.573 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:04.573 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:04.573 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:04.573 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:16:04.573 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.573 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.573 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.574 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:05.950 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:16:05.950 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:05.950 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:05.950 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:05.950 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:07.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:07.854 [2024-12-14 22:25:28.562151] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:07.854 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:09.233 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:09.233 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:09.233 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:09.233 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:09.233 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:11.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.138 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:11.138 [2024-12-14 22:25:31.943663] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:11.139 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.139 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:11.139 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.139 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:11.139 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.139 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:11.139 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.139 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:11.139 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.139 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:12.516 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:12.516 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:12.516 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:12.516 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:12.516 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:14.420 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:14.420 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:14.420 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:14.420 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:14.420 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:14.420 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:14.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:14.421 [2024-12-14 22:25:35.233612] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.421 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:15.798 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:15.798 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:15.798 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:15.798 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:15.798 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:17.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:17.703 [2024-12-14 22:25:38.508854] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.703 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:19.080 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:19.080 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:19.080 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:19.080 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:19.080 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:20.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc --
common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.985 [2024-12-14 22:25:41.766653] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.985 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.986 22:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:20.986 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.986 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.986 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.986 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:20.986 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.986 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.986 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.986 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:22.364 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:22.364 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:22.364 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:22.364 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:22.364 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:24.270 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:24.271 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:16:24.271 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:24.271 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:24.271 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:24.271 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:24.271 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:24.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.271 [2024-12-14 22:25:45.131953] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.271 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 [2024-12-14 22:25:45.183987] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.531 
22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 [2024-12-14 22:25:45.232114] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:24.531 
22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 [2024-12-14 22:25:45.280281] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.531 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.532 [2024-12-14 
22:25:45.332463] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.532 
22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:24.532 "tick_rate": 2100000000, 00:16:24.532 "poll_groups": [ 00:16:24.532 { 00:16:24.532 "name": "nvmf_tgt_poll_group_000", 00:16:24.532 "admin_qpairs": 2, 00:16:24.532 "io_qpairs": 168, 00:16:24.532 "current_admin_qpairs": 0, 00:16:24.532 "current_io_qpairs": 0, 00:16:24.532 "pending_bdev_io": 0, 00:16:24.532 "completed_nvme_io": 244, 00:16:24.532 "transports": [ 00:16:24.532 { 00:16:24.532 "trtype": "TCP" 00:16:24.532 } 00:16:24.532 ] 00:16:24.532 }, 00:16:24.532 { 00:16:24.532 "name": "nvmf_tgt_poll_group_001", 00:16:24.532 "admin_qpairs": 2, 00:16:24.532 "io_qpairs": 168, 00:16:24.532 "current_admin_qpairs": 0, 00:16:24.532 "current_io_qpairs": 0, 00:16:24.532 "pending_bdev_io": 0, 00:16:24.532 "completed_nvme_io": 287, 00:16:24.532 "transports": [ 00:16:24.532 { 00:16:24.532 "trtype": "TCP" 00:16:24.532 } 00:16:24.532 ] 00:16:24.532 }, 00:16:24.532 { 00:16:24.532 "name": "nvmf_tgt_poll_group_002", 00:16:24.532 "admin_qpairs": 1, 00:16:24.532 "io_qpairs": 168, 00:16:24.532 "current_admin_qpairs": 0, 00:16:24.532 "current_io_qpairs": 0, 00:16:24.532 "pending_bdev_io": 0, 00:16:24.532 "completed_nvme_io": 231, 00:16:24.532 "transports": [ 00:16:24.532 { 00:16:24.532 "trtype": "TCP" 00:16:24.532 } 00:16:24.532 ] 00:16:24.532 }, 00:16:24.532 { 00:16:24.532 "name": "nvmf_tgt_poll_group_003", 00:16:24.532 "admin_qpairs": 2, 00:16:24.532 "io_qpairs": 168, 
00:16:24.532 "current_admin_qpairs": 0, 00:16:24.532 "current_io_qpairs": 0, 00:16:24.532 "pending_bdev_io": 0, 00:16:24.532 "completed_nvme_io": 260, 00:16:24.532 "transports": [ 00:16:24.532 { 00:16:24.532 "trtype": "TCP" 00:16:24.532 } 00:16:24.532 ] 00:16:24.532 } 00:16:24.532 ] 00:16:24.532 }' 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:24.532 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:24.791 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:24.792 rmmod nvme_tcp 00:16:24.792 rmmod nvme_fabrics 00:16:24.792 rmmod nvme_keyring 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 267593 ']' 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 267593 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 267593 ']' 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 267593 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 267593 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 267593' 00:16:24.792 killing process with pid 267593 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@973 -- # kill 267593 00:16:24.792 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 267593 00:16:25.052 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:25.052 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:25.052 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:25.052 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:25.052 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:25.052 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:25.052 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:25.052 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:25.052 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:25.052 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.052 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.052 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.959 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:27.219 00:16:27.219 real 0m32.764s 00:16:27.219 user 1m38.897s 00:16:27.219 sys 0m6.491s 00:16:27.219 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.219 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.219 ************************************ 00:16:27.219 END TEST nvmf_rpc 00:16:27.219 
************************************ 00:16:27.219 22:25:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:27.219 22:25:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:27.219 22:25:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.219 22:25:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:27.219 ************************************ 00:16:27.219 START TEST nvmf_invalid 00:16:27.219 ************************************ 00:16:27.219 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:27.219 * Looking for test storage... 00:16:27.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:27.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.219 --rc genhtml_branch_coverage=1 00:16:27.219 --rc genhtml_function_coverage=1 00:16:27.219 --rc genhtml_legend=1 00:16:27.219 --rc geninfo_all_blocks=1 00:16:27.219 --rc geninfo_unexecuted_blocks=1 00:16:27.219 00:16:27.219 ' 
00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:27.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.219 --rc genhtml_branch_coverage=1 00:16:27.219 --rc genhtml_function_coverage=1 00:16:27.219 --rc genhtml_legend=1 00:16:27.219 --rc geninfo_all_blocks=1 00:16:27.219 --rc geninfo_unexecuted_blocks=1 00:16:27.219 00:16:27.219 ' 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:27.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.219 --rc genhtml_branch_coverage=1 00:16:27.219 --rc genhtml_function_coverage=1 00:16:27.219 --rc genhtml_legend=1 00:16:27.219 --rc geninfo_all_blocks=1 00:16:27.219 --rc geninfo_unexecuted_blocks=1 00:16:27.219 00:16:27.219 ' 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:27.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.219 --rc genhtml_branch_coverage=1 00:16:27.219 --rc genhtml_function_coverage=1 00:16:27.219 --rc genhtml_legend=1 00:16:27.219 --rc geninfo_all_blocks=1 00:16:27.219 --rc geninfo_unexecuted_blocks=1 00:16:27.219 00:16:27.219 ' 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.219 22:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.219 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.479 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:27.479 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:27.479 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.479 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.479 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:27.479 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.479 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:27.479 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:27.479 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.479 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.479 
22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.479 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.479 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.479 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.479 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.480 22:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:27.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:27.480 22:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:27.480 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:34.056 22:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.056 22:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:34.056 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:34.057 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:34.057 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:34.057 Found net devices under 0000:af:00.0: cvl_0_0 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:34.057 Found net devices under 0000:af:00.1: cvl_0_1 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.057 22:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.057 22:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:34.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:16:34.057 00:16:34.057 --- 10.0.0.2 ping statistics --- 00:16:34.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.057 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:34.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:16:34.057 00:16:34.057 --- 10.0.0.1 ping statistics --- 00:16:34.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.057 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:34.057 22:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:34.057 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:34.057 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:34.057 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:34.057 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:34.057 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:34.057 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=275234 00:16:34.057 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 275234 00:16:34.057 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:34.057 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 275234 ']' 00:16:34.057 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.057 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.057 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:34.057 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.057 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:34.057 [2024-12-14 22:25:54.096780] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:16:34.057 [2024-12-14 22:25:54.096825] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.057 [2024-12-14 22:25:54.177443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.057 [2024-12-14 22:25:54.200892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.057 [2024-12-14 22:25:54.200940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.057 [2024-12-14 22:25:54.200947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.057 [2024-12-14 22:25:54.200953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.057 [2024-12-14 22:25:54.200958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:34.057 [2024-12-14 22:25:54.202260] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:16:34.057 [2024-12-14 22:25:54.202298] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:16:34.057 [2024-12-14 22:25:54.202409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:34.057 [2024-12-14 22:25:54.202410] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12416
00:16:34.058 [2024-12-14 22:25:54.499725] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:16:34.058 {
00:16:34.058 "nqn": "nqn.2016-06.io.spdk:cnode12416",
00:16:34.058 "tgt_name": "foobar",
00:16:34.058 "method": "nvmf_create_subsystem",
00:16:34.058 "req_id": 1
00:16:34.058 }
00:16:34.058 Got JSON-RPC error response
00:16:34.058 response:
00:16:34.058 {
00:16:34.058 "code": -32603,
00:16:34.058 "message": "Unable to find target foobar"
00:16:34.058 }'
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:16:34.058 {
00:16:34.058 "nqn": "nqn.2016-06.io.spdk:cnode12416",
00:16:34.058 "tgt_name": "foobar",
00:16:34.058 "method": "nvmf_create_subsystem",
00:16:34.058 "req_id": 1
00:16:34.058 }
00:16:34.058 Got JSON-RPC error response
00:16:34.058 response:
00:16:34.058 {
00:16:34.058 "code": -32603,
00:16:34.058 "message": "Unable to find target foobar"
00:16:34.058 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29273
00:16:34.058 [2024-12-14 22:25:54.700432] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29273: invalid serial number 'SPDKISFASTANDAWESOME'
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:16:34.058 {
00:16:34.058 "nqn": "nqn.2016-06.io.spdk:cnode29273",
00:16:34.058 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:16:34.058 "method": "nvmf_create_subsystem",
00:16:34.058 "req_id": 1
00:16:34.058 }
00:16:34.058 Got JSON-RPC error response
00:16:34.058 response:
00:16:34.058 {
00:16:34.058 "code": -32602,
00:16:34.058 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:16:34.058 }'
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:16:34.058 {
00:16:34.058 "nqn": "nqn.2016-06.io.spdk:cnode29273",
00:16:34.058 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:16:34.058 "method": "nvmf_create_subsystem",
00:16:34.058 "req_id": 1
00:16:34.058 }
00:16:34.058 Got JSON-RPC error response
00:16:34.058 response:
00:16:34.058 {
00:16:34.058 "code": -32602,
00:16:34.058 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:16:34.058 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29045
00:16:34.058 [2024-12-14 22:25:54.889055] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29045: invalid model number 'SPDK_Controller'
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:16:34.058 {
00:16:34.058 "nqn": "nqn.2016-06.io.spdk:cnode29045",
00:16:34.058 "model_number": "SPDK_Controller\u001f",
00:16:34.058 "method": "nvmf_create_subsystem",
00:16:34.058 "req_id": 1
00:16:34.058 }
00:16:34.058 Got JSON-RPC error response
00:16:34.058 response:
00:16:34.058 {
00:16:34.058 "code": -32602,
00:16:34.058 "message": "Invalid MN SPDK_Controller\u001f"
00:16:34.058 }'
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:16:34.058 {
00:16:34.058 "nqn": "nqn.2016-06.io.spdk:cnode29045",
00:16:34.058 "model_number": "SPDK_Controller\u001f",
00:16:34.058 "method": "nvmf_create_subsystem",
00:16:34.058 "req_id": 1
00:16:34.058 }
00:16:34.058 Got JSON-RPC error response
00:16:34.058 response:
00:16:34.058 {
00:16:34.058 "code": -32602,
00:16:34.058 "message": "Invalid MN SPDK_Controller\u001f"
00:16:34.058 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local
length=21 ll
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f'
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65'
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.058 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57'
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e'
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~'
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78'
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49'
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a'
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39'
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25'
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=%
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65'
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79'
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73'
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s
00:16:34.328 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79'
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69'
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25'
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=%
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75'
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78'
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e'
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32'
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52'
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.328 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.329 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83
00:16:34.329 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53'
00:16:34.329 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S
00:16:34.329 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.329 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.329 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ _ == \- ]]
00:16:34.329 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo _eW~xIz9%eysyi%uxN2RS
00:16:34.329 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s _eW~xIz9%eysyi%uxN2RS nqn.2016-06.io.spdk:cnode15515
00:16:34.592 [2024-12-14 22:25:55.238235] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15515: invalid serial number '_eW~xIz9%eysyi%uxN2RS'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:16:34.592 {
00:16:34.592 "nqn": "nqn.2016-06.io.spdk:cnode15515",
00:16:34.592 "serial_number": "_eW~xIz9%eysyi%uxN2RS",
00:16:34.592 "method": "nvmf_create_subsystem",
00:16:34.592 "req_id": 1
00:16:34.592 }
00:16:34.592 Got JSON-RPC error response
00:16:34.592 response:
00:16:34.592 {
00:16:34.592 "code": -32602,
00:16:34.592 "message": "Invalid SN _eW~xIz9%eysyi%uxN2RS"
00:16:34.592 }'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:16:34.592 {
00:16:34.592 "nqn": "nqn.2016-06.io.spdk:cnode15515",
00:16:34.592 "serial_number": "_eW~xIz9%eysyi%uxN2RS",
00:16:34.592 "method": "nvmf_create_subsystem",
00:16:34.592 "req_id": 1
00:16:34.592 }
00:16:34.592 Got JSON-RPC error response
00:16:34.592 response:
00:16:34.592 {
00:16:34.592 "code": -32602,
00:16:34.592 "message": "Invalid SN _eW~xIz9%eysyi%uxN2RS"
00:16:34.592 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:16:34.592 22:25:55
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=,
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=.
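
The repetitive xtrace above is target/invalid.sh's gen_random_s helper building a random string one character at a time: pick a code point from the chars array (ASCII 32 through 127), render it with a printf %x / echo -e pair, and append it to string. A condensed, self-contained sketch of the same idea (the function name is kept for readability, but this is an illustration, not the exact SPDK helper):

```shell
# Sketch of the gen_random_s pattern traced above: build a string of
# $1 random characters drawn from ASCII code points 32..127.
gen_random_s() {
    local length=$1 ll string=
    for (( ll = 0; ll < length; ll++ )); do
        # Random code point in 32..127, appended as a literal character,
        # mirroring the printf %x / echo -e pair seen in the trace.
        string+=$(echo -e "\x$(printf %x $(( 32 + RANDOM % 96 )))")
    done
    echo "$string"
}

s=$(gen_random_s 21)
echo "${#s}"    # 21 characters, as in the invalid-SN test above
```

The resulting string deliberately includes shell metacharacters and control characters (note the quoting the trace needs for `~`, `$`, `'`, `\` and `$'\177'`), which is exactly what makes it a useful invalid serial/model number for the negative tests.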
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42'
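
Each negative test in this trace follows one pattern: invoke scripts/rpc.py nvmf_create_subsystem with a deliberately bad parameter, capture the JSON-RPC error text it prints, and glob-match the expected message ("Unable to find target", "Invalid SN", "Invalid MN"). A minimal sketch of that assertion, using a canned response in place of live rpc.py output (no running SPDK target is assumed here):

```shell
# The canned $out stands in for captured `rpc.py nvmf_create_subsystem` output;
# the [[ ... == *pattern* ]] glob match is the same check invalid.sh performs.
out='request:
{
  "nqn": "nqn.2016-06.io.spdk:cnode12416",
  "tgt_name": "foobar",
  "method": "nvmf_create_subsystem",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -32603,
  "message": "Unable to find target foobar"
}'

if [[ $out == *"Unable to find target"* ]]; then
    echo "negative test passed: got the expected JSON-RPC error"
fi
```

The test passes only when the RPC fails in the expected way; a silent success or a different error code/message would leave the pattern unmatched and trip the script's ERR handling.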
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69'
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.592 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67
00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43'
00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C
00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103
00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67'
00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g
00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76
00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c'
00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L
00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56
00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38'
00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8
00:16:34.851 22:25:55
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:34.851 22:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.851 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 1 == \- ]] 00:16:34.852 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '1U$hkPZ_k'\''\6,B.3wgBk8C`H`{yN'\''iCgL8NTbG'\''M' 00:16:34.852 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '1U$hkPZ_k'\''\6,B.3wgBk8C`H`{yN'\''iCgL8NTbG'\''M' nqn.2016-06.io.spdk:cnode27468 00:16:34.852 [2024-12-14 22:25:55.699738] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27468: invalid 
model number '1U$hkPZ_k'\6,B.3wgBk8C`H`{yN'iCgL8NTbG'M' 00:16:34.852 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:34.852 { 00:16:34.852 "nqn": "nqn.2016-06.io.spdk:cnode27468", 00:16:34.852 "model_number": "1U$h\u007fkPZ_k'\''\\6,B.3wgBk8C`H`{yN'\''iCgL8NTbG'\''M", 00:16:34.852 "method": "nvmf_create_subsystem", 00:16:34.852 "req_id": 1 00:16:34.852 } 00:16:34.852 Got JSON-RPC error response 00:16:34.852 response: 00:16:34.852 { 00:16:34.852 "code": -32602, 00:16:34.852 "message": "Invalid MN 1U$h\u007fkPZ_k'\''\\6,B.3wgBk8C`H`{yN'\''iCgL8NTbG'\''M" 00:16:34.852 }' 00:16:34.852 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:34.852 { 00:16:34.852 "nqn": "nqn.2016-06.io.spdk:cnode27468", 00:16:34.852 "model_number": "1U$h\u007fkPZ_k'\\6,B.3wgBk8C`H`{yN'iCgL8NTbG'M", 00:16:34.852 "method": "nvmf_create_subsystem", 00:16:34.852 "req_id": 1 00:16:34.852 } 00:16:34.852 Got JSON-RPC error response 00:16:34.852 response: 00:16:34.852 { 00:16:34.852 "code": -32602, 00:16:34.852 "message": "Invalid MN 1U$h\u007fkPZ_k'\\6,B.3wgBk8C`H`{yN'iCgL8NTbG'M" 00:16:34.852 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:34.852 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:35.110 [2024-12-14 22:25:55.896458] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.110 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:35.369 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:35.369 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:35.369 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@67 -- # head -n 1 00:16:35.369 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:35.369 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:35.628 [2024-12-14 22:25:56.303017] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:35.628 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:35.628 { 00:16:35.628 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:35.628 "listen_address": { 00:16:35.628 "trtype": "tcp", 00:16:35.628 "traddr": "", 00:16:35.628 "trsvcid": "4421" 00:16:35.628 }, 00:16:35.628 "method": "nvmf_subsystem_remove_listener", 00:16:35.628 "req_id": 1 00:16:35.628 } 00:16:35.628 Got JSON-RPC error response 00:16:35.628 response: 00:16:35.628 { 00:16:35.628 "code": -32602, 00:16:35.628 "message": "Invalid parameters" 00:16:35.628 }' 00:16:35.628 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:35.628 { 00:16:35.628 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:35.628 "listen_address": { 00:16:35.628 "trtype": "tcp", 00:16:35.628 "traddr": "", 00:16:35.628 "trsvcid": "4421" 00:16:35.628 }, 00:16:35.628 "method": "nvmf_subsystem_remove_listener", 00:16:35.628 "req_id": 1 00:16:35.628 } 00:16:35.628 Got JSON-RPC error response 00:16:35.628 response: 00:16:35.628 { 00:16:35.628 "code": -32602, 00:16:35.628 "message": "Invalid parameters" 00:16:35.628 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:35.628 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18485 -i 0 00:16:35.628 [2024-12-14 22:25:56.503687] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode18485: invalid cntlid range [0-65519] 00:16:35.887 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:35.887 { 00:16:35.887 "nqn": "nqn.2016-06.io.spdk:cnode18485", 00:16:35.887 "min_cntlid": 0, 00:16:35.887 "method": "nvmf_create_subsystem", 00:16:35.887 "req_id": 1 00:16:35.887 } 00:16:35.887 Got JSON-RPC error response 00:16:35.887 response: 00:16:35.887 { 00:16:35.887 "code": -32602, 00:16:35.887 "message": "Invalid cntlid range [0-65519]" 00:16:35.887 }' 00:16:35.887 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:35.887 { 00:16:35.887 "nqn": "nqn.2016-06.io.spdk:cnode18485", 00:16:35.887 "min_cntlid": 0, 00:16:35.887 "method": "nvmf_create_subsystem", 00:16:35.887 "req_id": 1 00:16:35.887 } 00:16:35.887 Got JSON-RPC error response 00:16:35.887 response: 00:16:35.887 { 00:16:35.887 "code": -32602, 00:16:35.887 "message": "Invalid cntlid range [0-65519]" 00:16:35.887 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:35.887 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26893 -i 65520 00:16:35.887 [2024-12-14 22:25:56.716381] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26893: invalid cntlid range [65520-65519] 00:16:35.887 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:35.887 { 00:16:35.887 "nqn": "nqn.2016-06.io.spdk:cnode26893", 00:16:35.887 "min_cntlid": 65520, 00:16:35.887 "method": "nvmf_create_subsystem", 00:16:35.887 "req_id": 1 00:16:35.887 } 00:16:35.887 Got JSON-RPC error response 00:16:35.887 response: 00:16:35.887 { 00:16:35.887 "code": -32602, 00:16:35.887 "message": "Invalid cntlid range [65520-65519]" 00:16:35.887 }' 00:16:35.887 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@76 -- # [[ request: 00:16:35.887 { 00:16:35.887 "nqn": "nqn.2016-06.io.spdk:cnode26893", 00:16:35.887 "min_cntlid": 65520, 00:16:35.887 "method": "nvmf_create_subsystem", 00:16:35.887 "req_id": 1 00:16:35.887 } 00:16:35.887 Got JSON-RPC error response 00:16:35.887 response: 00:16:35.887 { 00:16:35.887 "code": -32602, 00:16:35.887 "message": "Invalid cntlid range [65520-65519]" 00:16:35.887 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:35.887 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8998 -I 0 00:16:36.146 [2024-12-14 22:25:56.913032] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8998: invalid cntlid range [1-0] 00:16:36.146 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:36.146 { 00:16:36.146 "nqn": "nqn.2016-06.io.spdk:cnode8998", 00:16:36.146 "max_cntlid": 0, 00:16:36.146 "method": "nvmf_create_subsystem", 00:16:36.146 "req_id": 1 00:16:36.146 } 00:16:36.146 Got JSON-RPC error response 00:16:36.146 response: 00:16:36.146 { 00:16:36.146 "code": -32602, 00:16:36.146 "message": "Invalid cntlid range [1-0]" 00:16:36.146 }' 00:16:36.146 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:36.146 { 00:16:36.146 "nqn": "nqn.2016-06.io.spdk:cnode8998", 00:16:36.146 "max_cntlid": 0, 00:16:36.146 "method": "nvmf_create_subsystem", 00:16:36.146 "req_id": 1 00:16:36.146 } 00:16:36.146 Got JSON-RPC error response 00:16:36.146 response: 00:16:36.146 { 00:16:36.146 "code": -32602, 00:16:36.146 "message": "Invalid cntlid range [1-0]" 00:16:36.146 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:36.146 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode16607 -I 65520 00:16:36.405 [2024-12-14 22:25:57.113709] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16607: invalid cntlid range [1-65520] 00:16:36.405 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:36.405 { 00:16:36.405 "nqn": "nqn.2016-06.io.spdk:cnode16607", 00:16:36.405 "max_cntlid": 65520, 00:16:36.405 "method": "nvmf_create_subsystem", 00:16:36.405 "req_id": 1 00:16:36.405 } 00:16:36.405 Got JSON-RPC error response 00:16:36.405 response: 00:16:36.405 { 00:16:36.405 "code": -32602, 00:16:36.405 "message": "Invalid cntlid range [1-65520]" 00:16:36.405 }' 00:16:36.405 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:36.405 { 00:16:36.405 "nqn": "nqn.2016-06.io.spdk:cnode16607", 00:16:36.405 "max_cntlid": 65520, 00:16:36.405 "method": "nvmf_create_subsystem", 00:16:36.405 "req_id": 1 00:16:36.405 } 00:16:36.405 Got JSON-RPC error response 00:16:36.405 response: 00:16:36.405 { 00:16:36.405 "code": -32602, 00:16:36.405 "message": "Invalid cntlid range [1-65520]" 00:16:36.405 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:36.405 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode129 -i 6 -I 5 00:16:36.664 [2024-12-14 22:25:57.310389] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode129: invalid cntlid range [6-5] 00:16:36.664 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:36.664 { 00:16:36.665 "nqn": "nqn.2016-06.io.spdk:cnode129", 00:16:36.665 "min_cntlid": 6, 00:16:36.665 "max_cntlid": 5, 00:16:36.665 "method": "nvmf_create_subsystem", 00:16:36.665 "req_id": 1 00:16:36.665 } 00:16:36.665 Got JSON-RPC error response 00:16:36.665 response: 00:16:36.665 { 00:16:36.665 
"code": -32602, 00:16:36.665 "message": "Invalid cntlid range [6-5]" 00:16:36.665 }' 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:36.665 { 00:16:36.665 "nqn": "nqn.2016-06.io.spdk:cnode129", 00:16:36.665 "min_cntlid": 6, 00:16:36.665 "max_cntlid": 5, 00:16:36.665 "method": "nvmf_create_subsystem", 00:16:36.665 "req_id": 1 00:16:36.665 } 00:16:36.665 Got JSON-RPC error response 00:16:36.665 response: 00:16:36.665 { 00:16:36.665 "code": -32602, 00:16:36.665 "message": "Invalid cntlid range [6-5]" 00:16:36.665 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:36.665 { 00:16:36.665 "name": "foobar", 00:16:36.665 "method": "nvmf_delete_target", 00:16:36.665 "req_id": 1 00:16:36.665 } 00:16:36.665 Got JSON-RPC error response 00:16:36.665 response: 00:16:36.665 { 00:16:36.665 "code": -32602, 00:16:36.665 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:36.665 }' 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:36.665 { 00:16:36.665 "name": "foobar", 00:16:36.665 "method": "nvmf_delete_target", 00:16:36.665 "req_id": 1 00:16:36.665 } 00:16:36.665 Got JSON-RPC error response 00:16:36.665 response: 00:16:36.665 { 00:16:36.665 "code": -32602, 00:16:36.665 "message": "The specified target doesn't exist, cannot delete it." 
00:16:36.665 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:36.665 rmmod nvme_tcp 00:16:36.665 rmmod nvme_fabrics 00:16:36.665 rmmod nvme_keyring 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 275234 ']' 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 275234 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 275234 ']' 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 275234 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.665 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275234 00:16:36.925 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:36.925 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:36.925 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275234' 00:16:36.925 killing process with pid 275234 00:16:36.925 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 275234 00:16:36.925 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 275234 00:16:36.925 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:36.925 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:36.925 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:36.925 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:16:36.925 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:16:36.925 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:36.925 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:16:36.925 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:36.925 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:36.925 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.925 22:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.925 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.463 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:39.463 00:16:39.463 real 0m11.886s 00:16:39.463 user 0m18.327s 00:16:39.463 sys 0m5.286s 00:16:39.463 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.463 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:39.463 ************************************ 00:16:39.463 END TEST nvmf_invalid 00:16:39.463 ************************************ 00:16:39.463 22:25:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:39.463 22:25:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:39.463 22:25:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.463 22:25:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:39.463 ************************************ 00:16:39.463 START TEST nvmf_connect_stress 00:16:39.463 ************************************ 00:16:39.463 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:39.463 * Looking for test storage... 
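The nvmf_invalid run above builds its random model-number strings one character at a time, with `printf %x` producing a hex code point and `echo -e '\xNN'` appending the character (target/invalid.sh lines 24-25). A minimal standalone sketch of that loop is below; `gen_random_string` and the printable range 0x21-0x7e are illustrative assumptions, not the exact invalid.sh helper:

```shell
# Sketch of the per-character string builder traced in the log above.
# gen_random_string is a hypothetical name; the character range is an
# assumption for illustration (printable ASCII, no space).
gen_random_string() {
    local length=$1 ll string=''
    for (( ll = 0; ll < length; ll++ )); do
        # Pick a code point in 0x21-0x7e, render it as hex, then append
        # the character, mirroring the printf %x / echo -e pair in the trace.
        local code
        code=$(printf %x $(( RANDOM % 94 + 33 )))
        string+=$(echo -e "\x$code")
    done
    echo "$string"
}

gen_random_string 41
```

Strings built this way routinely contain shell metacharacters (backticks, quotes, braces), which is exactly what makes them useful for probing the RPC layer's model-number validation.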
00:16:39.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.463 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:39.463 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:16:39.463 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:39.463 22:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:39.463 22:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:39.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.463 --rc genhtml_branch_coverage=1 00:16:39.463 --rc genhtml_function_coverage=1 00:16:39.463 --rc genhtml_legend=1 00:16:39.463 --rc geninfo_all_blocks=1 00:16:39.463 --rc geninfo_unexecuted_blocks=1 00:16:39.463 00:16:39.463 ' 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:39.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.463 --rc genhtml_branch_coverage=1 00:16:39.463 --rc genhtml_function_coverage=1 00:16:39.463 --rc genhtml_legend=1 00:16:39.463 --rc geninfo_all_blocks=1 00:16:39.463 --rc geninfo_unexecuted_blocks=1 00:16:39.463 00:16:39.463 ' 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:39.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.463 --rc genhtml_branch_coverage=1 00:16:39.463 --rc genhtml_function_coverage=1 00:16:39.463 --rc genhtml_legend=1 00:16:39.463 --rc geninfo_all_blocks=1 00:16:39.463 --rc geninfo_unexecuted_blocks=1 00:16:39.463 00:16:39.463 ' 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:39.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.463 --rc genhtml_branch_coverage=1 00:16:39.463 --rc genhtml_function_coverage=1 00:16:39.463 --rc genhtml_legend=1 00:16:39.463 --rc geninfo_all_blocks=1 00:16:39.463 --rc geninfo_unexecuted_blocks=1 00:16:39.463 00:16:39.463 ' 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.463 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:39.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:39.464 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:46.038 22:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:46.038 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:46.038 22:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:46.038 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.038 22:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:46.038 Found net devices under 0000:af:00.0: cvl_0_0 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:46.038 Found net devices under 0000:af:00.1: cvl_0_1 
00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:46.038 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:46.039 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.039 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:46.039 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:46.039 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:46.039 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:46.039 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:46.039 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:46.039 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:46.039 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:46.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:46.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:16:46.039 00:16:46.039 --- 10.0.0.2 ping statistics --- 00:16:46.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.039 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:46.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:46.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:16:46.039 00:16:46.039 --- 10.0.0.1 ping statistics --- 00:16:46.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.039 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:46.039 22:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=279328 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 279328 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 279328 ']' 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.039 [2024-12-14 22:26:06.161181] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:16:46.039 [2024-12-14 22:26:06.161225] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.039 [2024-12-14 22:26:06.239304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:46.039 [2024-12-14 22:26:06.261446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.039 [2024-12-14 22:26:06.261480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.039 [2024-12-14 22:26:06.261487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.039 [2024-12-14 22:26:06.261493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.039 [2024-12-14 22:26:06.261498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:46.039 [2024-12-14 22:26:06.262737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.039 [2024-12-14 22:26:06.262849] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.039 [2024-12-14 22:26:06.262850] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.039 [2024-12-14 22:26:06.393476] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.039 [2024-12-14 22:26:06.413693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.039 NULL1 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=279476 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
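The `rpc_cmd` calls traced above (`nvmf_create_transport`, `nvmf_create_subsystem`, `nvmf_subsystem_add_listener`, `bdev_null_create`) dispatch to SPDK's `scripts/rpc.py` against the target's RPC socket. Reconstructed as direct invocations, with all values taken from the log (the `-o` transport flag comes verbatim from `NVMF_TRANSPORT_OPTS='-t tcp -o'` in the trace), the setup is roughly:

```
# Create the TCP transport; -u sets an 8192-byte I/O unit size,
# -o is the TCP-specific option carried in NVMF_TRANSPORT_OPTS above.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

# Create a subsystem that allows any host (-a), with the given serial
# number (-s) and a maximum of 10 namespaces (-m).
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10

# Listen for NVMe/TCP connections on the namespaced target address.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420

# Back the subsystem with a 1000 MiB null bdev using 512-byte blocks.
scripts/rpc.py bdev_null_create NULL1 1000 512
```

This assumes a running `nvmf_tgt` listening on the default `/var/tmp/spdk.sock`; in the log the equivalent commands run inside the `cvl_0_0_ns_spdk` namespace. The `connect_stress` binary launched next (`PERF_PID=279476`) then hammers this 10.0.0.2:4420 listener while the loop that follows repeatedly checks the process with `kill -0` and issues RPCs against the live target.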
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:46.039 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:46.040 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:46.040 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.040 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.040 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.040 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.040 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:46.040 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.040 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress --
common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.040 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.299 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.299 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:46.299 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.299 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.299 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.866 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.866 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:46.866 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.866 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.866 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.124 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.124 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:47.124 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.124 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.124 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.383 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.383 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:47.383 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.383 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.383 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.641 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.641 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:47.641 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.641 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.641 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.208 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.208 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:48.209 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.209 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.209 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.468 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.468 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:48.468 22:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.468 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.468 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.726 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.726 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:48.726 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.726 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.726 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.985 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.985 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:48.985 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.985 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.985 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.243 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.243 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:49.243 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.243 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.243 22:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.810 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.810 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:49.810 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.810 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.810 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.069 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.069 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:50.069 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.069 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.069 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.327 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.327 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:50.327 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.327 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.327 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.586 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.586 22:26:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:50.586 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.586 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.586 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.845 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.845 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:50.845 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.845 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.845 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.412 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.412 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:51.412 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.412 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.412 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.671 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.671 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:51.671 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.671 22:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.671 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.929 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.929 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:51.929 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.929 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.929 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.188 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.188 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:52.188 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.188 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.188 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.755 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.755 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:52.755 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.755 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.755 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.014 22:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.014 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:53.014 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.014 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.014 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.272 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.272 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:53.272 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.272 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.272 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.530 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.530 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:53.530 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.530 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.530 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.789 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.789 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:53.789 
22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.789 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.789 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.355 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.356 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:54.356 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.356 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.356 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.613 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.613 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:54.613 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.613 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.613 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.871 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.871 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:54.871 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.871 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.871 
22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.130 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.130 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:55.130 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.130 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.130 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.698 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.698 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:55.698 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.698 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.698 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.698 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:55.957 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.957 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279476 00:16:55.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (279476) - No such process 00:16:55.957 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 279476 00:16:55.957 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:55.957 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:55.957 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:55.957 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:55.958 rmmod nvme_tcp 00:16:55.958 rmmod nvme_fabrics 00:16:55.958 rmmod nvme_keyring 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 279328 ']' 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 279328 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 279328 ']' 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 279328 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 
00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279328 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279328' 00:16:55.958 killing process with pid 279328 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 279328 00:16:55.958 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 279328 00:16:56.218 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:56.218 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:56.218 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:56.218 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:16:56.218 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:16:56.218 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:16:56.218 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:56.218 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:56.218 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:16:56.218 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.218 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.218 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.125 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:58.125 00:16:58.125 real 0m19.081s 00:16:58.125 user 0m41.210s 00:16:58.125 sys 0m6.733s 00:16:58.125 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.125 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.125 ************************************ 00:16:58.125 END TEST nvmf_connect_stress 00:16:58.125 ************************************ 00:16:58.125 22:26:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:58.125 22:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:58.125 22:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.125 22:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:58.385 ************************************ 00:16:58.385 START TEST nvmf_fused_ordering 00:16:58.385 ************************************ 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:58.385 * Looking for test storage... 
00:16:58.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:16:58.385 22:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:58.385 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:58.386 22:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:58.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.386 --rc genhtml_branch_coverage=1 00:16:58.386 --rc genhtml_function_coverage=1 00:16:58.386 --rc genhtml_legend=1 00:16:58.386 --rc geninfo_all_blocks=1 00:16:58.386 --rc geninfo_unexecuted_blocks=1 00:16:58.386 00:16:58.386 ' 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:58.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.386 --rc genhtml_branch_coverage=1 00:16:58.386 --rc genhtml_function_coverage=1 00:16:58.386 --rc genhtml_legend=1 00:16:58.386 --rc geninfo_all_blocks=1 00:16:58.386 --rc geninfo_unexecuted_blocks=1 00:16:58.386 00:16:58.386 ' 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:58.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.386 --rc genhtml_branch_coverage=1 00:16:58.386 --rc genhtml_function_coverage=1 00:16:58.386 --rc genhtml_legend=1 00:16:58.386 --rc geninfo_all_blocks=1 00:16:58.386 --rc geninfo_unexecuted_blocks=1 00:16:58.386 00:16:58.386 ' 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:58.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.386 --rc genhtml_branch_coverage=1 00:16:58.386 --rc genhtml_function_coverage=1 00:16:58.386 --rc genhtml_legend=1 00:16:58.386 --rc geninfo_all_blocks=1 00:16:58.386 --rc geninfo_unexecuted_blocks=1 00:16:58.386 00:16:58.386 ' 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:58.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:16:58.386 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:04.958 22:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:04.958 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:04.958 22:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:04.958 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.958 22:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:04.958 Found net devices under 0000:af:00.0: cvl_0_0 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:04.958 Found net devices under 0000:af:00.1: cvl_0_1 
00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:04.958 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:04.959 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:04.959 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:17:04.959 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:17:04.959 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:17:04.959 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:04.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:04.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms
00:17:04.959
00:17:04.959 --- 10.0.0.2 ping statistics ---
00:17:04.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:04.959 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:04.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:04.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms
00:17:04.959
00:17:04.959 --- 10.0.0.1 ping statistics ---
00:17:04.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:04.959 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:17:04.959 22:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=284613
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 284613
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 284613 ']'
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:04.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:04.959 [2024-12-14 22:26:25.220709] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:17:04.959 [2024-12-14 22:26:25.220752] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:04.959 [2024-12-14 22:26:25.295197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:04.959 [2024-12-14 22:26:25.316754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:04.959 [2024-12-14 22:26:25.316790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:04.959 [2024-12-14 22:26:25.316797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:04.959 [2024-12-14 22:26:25.316803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:04.959 [2024-12-14 22:26:25.316808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:04.959 [2024-12-14 22:26:25.317302] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:04.959 [2024-12-14 22:26:25.447953] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:04.959 [2024-12-14 22:26:25.468132] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:04.959 NULL1 00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.959 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:04.959 [2024-12-14 22:26:25.524009] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:04.959 [2024-12-14 22:26:25.524040] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid284635 ] 00:17:05.218 Attached to nqn.2016-06.io.spdk:cnode1 00:17:05.218 Namespace ID: 1 size: 1GB 00:17:05.218 fused_ordering(0) 00:17:05.218 fused_ordering(1) 00:17:05.218 fused_ordering(2) 00:17:05.218 fused_ordering(3) 00:17:05.218 fused_ordering(4) 00:17:05.218 fused_ordering(5) 00:17:05.218 fused_ordering(6) 00:17:05.218 fused_ordering(7) 00:17:05.218 fused_ordering(8) 00:17:05.218 fused_ordering(9) 00:17:05.218 fused_ordering(10) 00:17:05.218 fused_ordering(11) 00:17:05.218 fused_ordering(12) 00:17:05.218 fused_ordering(13) 00:17:05.218 fused_ordering(14) 00:17:05.218 fused_ordering(15) 00:17:05.218 fused_ordering(16) 00:17:05.218 fused_ordering(17) 00:17:05.218 fused_ordering(18) 00:17:05.218 fused_ordering(19) 00:17:05.218 fused_ordering(20) 00:17:05.218 fused_ordering(21) 00:17:05.218 fused_ordering(22) 00:17:05.218 fused_ordering(23) 00:17:05.218 fused_ordering(24) 00:17:05.218 fused_ordering(25) 00:17:05.218 fused_ordering(26) 00:17:05.218 fused_ordering(27) 00:17:05.218 
fused_ordering(28) 00:17:05.218 [fused_ordering iterations 29 through 1022 elided; logged between 00:17:05.218 and 00:17:06.565] fused_ordering(1023) 00:17:06.565
22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 284613 ']'
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 284613
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 284613 ']'
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 284613
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 284613
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 284613'
killing process with pid 284613
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 284613
00:17:06.565 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 284613
00:17:06.825 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:06.825 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
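The killprocess trace above (check the PID with `kill -0`, confirm the command name with `ps --no-headers -o comm=`, then `kill` and `wait`) can be sketched as a standalone helper. This is a reconstruction from the trace, not SPDK's actual autotest_common.sh implementation; the name-check guards against acting on a recycled PID.

```shell
# Sketch of the killprocess teardown pattern seen in the trace above.
# Reconstructed from the xtrace output; not the verbatim SPDK helper.
killprocess() {
    local pid=$1
    # Probe without signaling: kill -0 succeeds iff the PID exists.
    kill -0 "$pid" 2>/dev/null || return 0    # already gone, nothing to do
    # Guard against PID reuse: look at the command name before killing.
    local name
    name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    # Reap the process if it is our child; ignore the kill-induced status.
    wait "$pid" 2>/dev/null || true
}
```

Calling it a second time on the same PID is safe: the initial `kill -0` probe fails and the function returns 0 immediately.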
00:17:06.825 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:06.825 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:17:06.825 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:17:06.825 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:06.825 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:17:06.825 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:06.825 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:06.825 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:06.825 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:06.825 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:08.730 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:08.731
00:17:08.731 real 0m10.520s
00:17:08.731 user 0m5.060s
00:17:08.731 sys 0m5.464s
00:17:08.731 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:08.731 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:08.731 ************************************
00:17:08.731 END TEST nvmf_fused_ordering
00:17:08.731 ************************************
00:17:08.731 22:26:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:17:08.731 22:26:29
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:08.731 22:26:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:08.731 22:26:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:08.990 ************************************ 00:17:08.990 START TEST nvmf_ns_masking 00:17:08.990 ************************************ 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:08.990 * Looking for test storage... 00:17:08.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:08.990 22:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:08.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.990 --rc genhtml_branch_coverage=1 00:17:08.990 --rc genhtml_function_coverage=1 00:17:08.990 --rc genhtml_legend=1 00:17:08.990 --rc geninfo_all_blocks=1 00:17:08.990 --rc geninfo_unexecuted_blocks=1 00:17:08.990 00:17:08.990 ' 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:08.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.990 --rc genhtml_branch_coverage=1 00:17:08.990 --rc genhtml_function_coverage=1 00:17:08.990 --rc genhtml_legend=1 00:17:08.990 --rc geninfo_all_blocks=1 00:17:08.990 --rc geninfo_unexecuted_blocks=1 00:17:08.990 00:17:08.990 ' 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:08.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.990 --rc genhtml_branch_coverage=1 00:17:08.990 --rc genhtml_function_coverage=1 00:17:08.990 --rc genhtml_legend=1 00:17:08.990 --rc geninfo_all_blocks=1 00:17:08.990 --rc geninfo_unexecuted_blocks=1 00:17:08.990 00:17:08.990 ' 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:08.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.990 --rc genhtml_branch_coverage=1 00:17:08.990 --rc 
genhtml_function_coverage=1 00:17:08.990 --rc genhtml_legend=1 00:17:08.990 --rc geninfo_all_blocks=1 00:17:08.990 --rc geninfo_unexecuted_blocks=1 00:17:08.990 00:17:08.990 ' 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.990 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:08.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=25ee3f28-6edb-4d5d-8b7a-ad25ee56689b 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=bdeeb6f9-9431-4ddd-b36f-c7c10775a4d6 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=77b1b095-1757-4c14-b19a-0e64a5b12652 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.991 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.250 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:09.250 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:09.250 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:09.250 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:15.831 22:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.831 22:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:15.831 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.831 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:15.832 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:17:15.832 Found net devices under 0000:af:00.0: cvl_0_0 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:15.832 Found net devices under 0000:af:00.1: cvl_0_1 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:15.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:15.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:17:15.832 00:17:15.832 --- 10.0.0.2 ping statistics --- 00:17:15.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.832 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:15.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:15.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:17:15.832 00:17:15.832 --- 10.0.0.1 ping statistics --- 00:17:15.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.832 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=288459 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 288459 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 288459 ']' 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:15.832 [2024-12-14 22:26:35.793633] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:15.832 [2024-12-14 22:26:35.793676] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.832 [2024-12-14 22:26:35.870387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.832 [2024-12-14 22:26:35.891108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.832 [2024-12-14 22:26:35.891143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:15.832 [2024-12-14 22:26:35.891150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.832 [2024-12-14 22:26:35.891158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.832 [2024-12-14 22:26:35.891163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:15.832 [2024-12-14 22:26:35.891691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:15.832 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:15.832 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.832 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:15.832 [2024-12-14 22:26:36.190591] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.832 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:15.833 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:15.833 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:17:15.833 Malloc1 00:17:15.833 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:15.833 Malloc2 00:17:15.833 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:16.091 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:16.350 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:16.350 [2024-12-14 22:26:37.222319] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.609 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:16.609 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 77b1b095-1757-4c14-b19a-0e64a5b12652 -a 10.0.0.2 -s 4420 -i 4 00:17:16.609 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:16.609 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:16.609 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:16.609 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:16.609 22:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:19.142 [ 0]:0x1 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:19.142 
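The waitforserial/sleep loop traced above retries until `lsblk -l -o NAME,SERIAL | grep -c <serial>` reports the expected device count. A hedged sketch with the lsblk output stubbed so it runs without real NVMe devices (device names are illustrative):

```shell
# Stub for `lsblk -l -o NAME,SERIAL`: two namespaces with the test serial.
list_block_devices() {
    printf 'nvme0n1 SPDKISFASTANDAWESOME\nnvme0n2 SPDKISFASTANDAWESOME\n'
}

# Retry up to 16 times until the expected number of devices appears.
waitforserial() {
    serial=$1
    expected=${2:-1}
    i=0
    while [ $((i += 1)) -le 16 ]; do
        found=$(list_block_devices | grep -c "$serial")
        [ "$found" -eq "$expected" ] && return 0
        sleep 1
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME 2 && echo ready   # prints: ready
```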
22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=476bac8909ac4e90841304216e2d4663 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 476bac8909ac4e90841304216e2d4663 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:19.142 [ 0]:0x1 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=476bac8909ac4e90841304216e2d4663 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 476bac8909ac4e90841304216e2d4663 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:19.142 [ 1]:0x2 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5740ce0a010b47f58963d092f76661e3 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5740ce0a010b47f58963d092f76661e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:19.142 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:19.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.401 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.402 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:19.660 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:19.660 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 77b1b095-1757-4c14-b19a-0e64a5b12652 -a 10.0.0.2 -s 4420 -i 4 00:17:19.919 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:19.919 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:19.919 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:19.919 22:26:40 
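The ns_is_visible check repeated above treats a namespace as visible when `nvme id-ns ... -o json | jq -r .nguid` returns a non-zero NGUID, and as masked when the NGUID is all zeros. A minimal sketch of just that comparison (NGUID values copied from the log; the helper name is illustrative):

```shell
# A masked namespace reports an all-zero NGUID; anything else is visible.
nguid_visible() {
    [ "$1" != "00000000000000000000000000000000" ]
}

nguid_visible 476bac8909ac4e90841304216e2d4663 && echo "ns visible"   # prints: ns visible
nguid_visible 00000000000000000000000000000000 || echo "ns masked"    # prints: ns masked
```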
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:19.919 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:19.919 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.826 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:21.826 [ 0]:0x2 00:17:22.085 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:22.085 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:22.085 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5740ce0a010b47f58963d092f76661e3 00:17:22.085 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5740ce0a010b47f58963d092f76661e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:22.085 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:22.085 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:22.085 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:22.085 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:22.085 [ 0]:0x1 00:17:22.345 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:22.345 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:22.345 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=476bac8909ac4e90841304216e2d4663 00:17:22.345 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 476bac8909ac4e90841304216e2d4663 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:22.345 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:22.345 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:22.345 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:22.345 [ 1]:0x2 00:17:22.345 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:22.345 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:22.345 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5740ce0a010b47f58963d092f76661e3 00:17:22.345 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5740ce0a010b47f58963d092f76661e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:22.345 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:22.604 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:22.604 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:22.604 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:22.604 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:22.604 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
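The nvmf_ns_add_host / nvmf_ns_remove_host calls exercised above toggle per-host visibility of a namespace that was added with --no-auto-visible. A toy model of that masking table (not SPDK code; a space-separated list stands in for the target's internal state):

```shell
# Masking table: entries of the form "<nsid>=<host NQN>".
visible=""

ns_add_host()    { visible="$visible $1=$2"; }
ns_remove_host() { visible=$(printf '%s' "$visible" | sed "s| $1=$2||"); }
ns_is_visible()  {
    case " $visible " in
        *" $1=$2 "*) return 0 ;;
        *)           return 1 ;;
    esac
}

ns_add_host 1 nqn.2016-06.io.spdk:host1
ns_is_visible 1 nqn.2016-06.io.spdk:host1 && echo "nsid 1 visible to host1"
ns_remove_host 1 nqn.2016-06.io.spdk:host1
ns_is_visible 1 nqn.2016-06.io.spdk:host1 || echo "nsid 1 masked again"
```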
ns_is_visible 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:22.605 [ 0]:0x2 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5740ce0a010b47f58963d092f76661e3 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5740ce0a010b47f58963d092f76661e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:22.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.605 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:22.864 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:22.864 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 77b1b095-1757-4c14-b19a-0e64a5b12652 -a 10.0.0.2 -s 4420 -i 4 00:17:23.122 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:23.123 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:23.123 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:23.123 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:23.123 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:23.123 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:25.027 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:25.027 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:25.027 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:25.027 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:25.027 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:25.027 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:25.027 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:25.027 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:25.027 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:25.027 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:25.027 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:25.027 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:25.027 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:25.027 [ 0]:0x1 00:17:25.027 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:25.027 22:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.286 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=476bac8909ac4e90841304216e2d4663 00:17:25.286 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 476bac8909ac4e90841304216e2d4663 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.286 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:25.286 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:25.286 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:25.286 [ 1]:0x2 00:17:25.286 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:25.286 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.286 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5740ce0a010b47f58963d092f76661e3 00:17:25.286 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5740ce0a010b47f58963d092f76661e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.286 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:25.545 
22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:25.545 [ 0]:0x2 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5740ce0a010b47f58963d092f76661e3 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5740ce0a010b47f58963d092f76661e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:25.545 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:25.546 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:25.546 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.546 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.546 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.546 22:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.546 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.546 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.546 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.546 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:25.546 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:25.805 [2024-12-14 22:26:46.456478] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:25.805 request: 00:17:25.805 { 00:17:25.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.805 "nsid": 2, 00:17:25.805 "host": "nqn.2016-06.io.spdk:host1", 00:17:25.805 "method": "nvmf_ns_remove_host", 00:17:25.805 "req_id": 1 00:17:25.805 } 00:17:25.805 Got JSON-RPC error response 00:17:25.805 response: 00:17:25.805 { 00:17:25.805 "code": -32602, 00:17:25.805 "message": "Invalid parameters" 00:17:25.805 } 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
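The failed nvmf_ns_remove_host call above (the namespace was not added with --no-auto-visible, so there is no per-host mask to edit) returns a JSON-RPC error object with code -32602, the standard JSON-RPC "Invalid params" code. A small sketch of extracting that code from such a response with sed (response text copied from the log; in the test scripts this parsing is handled by rpc.py itself):

```shell
# Error object as returned in the log above.
response='{"code": -32602, "message": "Invalid parameters"}'
# Pull out the (possibly negative) integer after "code":
code=$(printf '%s' "$response" | sed -n 's/.*"code": *\(-\{0,1\}[0-9][0-9]*\).*/\1/p')
echo "$code"   # prints: -32602
```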
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:25.805 22:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:25.805 [ 0]:0x2 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5740ce0a010b47f58963d092f76661e3 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5740ce0a010b47f58963d092f76661e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:25.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=290281 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 290281 
/var/tmp/host.sock 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 290281 ']' 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:25.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.805 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:26.065 [2024-12-14 22:26:46.688877] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:17:26.065 [2024-12-14 22:26:46.688930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290281 ] 00:17:26.065 [2024-12-14 22:26:46.763953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.065 [2024-12-14 22:26:46.785933] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.323 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.323 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:26.323 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:26.323 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:26.581 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 25ee3f28-6edb-4d5d-8b7a-ad25ee56689b 00:17:26.581 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:26.581 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 25EE3F286EDB4D5D8B7AAD25EE56689B -i 00:17:26.840 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid bdeeb6f9-9431-4ddd-b36f-c7c10775a4d6 00:17:26.840 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:26.840 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g BDEEB6F994314DDDB36FC7C10775A4D6 -i 00:17:27.099 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:27.099 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:27.357 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:27.357 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:27.925 nvme0n1 00:17:27.925 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:27.925 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:27.925 nvme1n2 00:17:28.184 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:28.184 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:28.184 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:28.184 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:28.184 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:28.184 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:28.184 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:28.184 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:28.184 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:28.443 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 25ee3f28-6edb-4d5d-8b7a-ad25ee56689b == \2\5\e\e\3\f\2\8\-\6\e\d\b\-\4\d\5\d\-\8\b\7\a\-\a\d\2\5\e\e\5\6\6\8\9\b ]] 00:17:28.443 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:28.443 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:28.443 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:28.702 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ bdeeb6f9-9431-4ddd-b36f-c7c10775a4d6 == \b\d\e\e\b\6\f\9\-\9\4\3\1\-\4\d\d\d\-\b\3\6\f\-\c\7\c\1\0\7\7\5\a\4\d\6 ]] 00:17:28.702 22:26:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:28.960 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:28.960 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 25ee3f28-6edb-4d5d-8b7a-ad25ee56689b 00:17:28.960 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:28.960 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 25EE3F286EDB4D5D8B7AAD25EE56689B 00:17:28.960 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:28.960 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 25EE3F286EDB4D5D8B7AAD25EE56689B 00:17:28.960 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.960 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.960 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.960 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.960 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.960 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.960 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.960 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:28.960 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 25EE3F286EDB4D5D8B7AAD25EE56689B 00:17:29.219 [2024-12-14 22:26:49.978109] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:29.219 [2024-12-14 22:26:49.978143] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:29.219 [2024-12-14 22:26:49.978151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.219 request: 00:17:29.219 { 00:17:29.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.219 "namespace": { 00:17:29.219 "bdev_name": "invalid", 00:17:29.219 "nsid": 1, 00:17:29.219 "nguid": "25EE3F286EDB4D5D8B7AAD25EE56689B", 00:17:29.219 "no_auto_visible": false, 00:17:29.219 "hide_metadata": false 00:17:29.219 }, 00:17:29.219 "method": "nvmf_subsystem_add_ns", 00:17:29.219 "req_id": 1 00:17:29.219 } 00:17:29.219 Got JSON-RPC error response 00:17:29.219 response: 00:17:29.219 { 00:17:29.219 "code": -32602, 00:17:29.219 "message": "Invalid parameters" 00:17:29.219 } 00:17:29.219 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:29.219 22:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:29.219 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.219 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.219 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 25ee3f28-6edb-4d5d-8b7a-ad25ee56689b 00:17:29.219 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:29.219 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 25EE3F286EDB4D5D8B7AAD25EE56689B -i 00:17:29.478 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:31.382 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:31.382 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:31.382 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:31.641 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:31.641 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 290281 00:17:31.641 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 290281 ']' 00:17:31.641 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 290281 00:17:31.641 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:31.641 22:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.641 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 290281 00:17:31.641 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:31.642 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:31.642 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 290281' 00:17:31.642 killing process with pid 290281 00:17:31.642 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 290281 00:17:31.642 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 290281 00:17:31.900 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.160 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:32.160 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:32.160 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:32.160 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:32.160 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:32.160 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:32.160 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:32.160 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:17:32.160 rmmod nvme_tcp 00:17:32.160 rmmod nvme_fabrics 00:17:32.160 rmmod nvme_keyring 00:17:32.160 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:32.160 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:32.160 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:32.160 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 288459 ']' 00:17:32.160 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 288459 00:17:32.160 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 288459 ']' 00:17:32.160 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 288459 00:17:32.160 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:32.160 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.160 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 288459 00:17:32.420 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:32.420 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:32.420 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 288459' 00:17:32.420 killing process with pid 288459 00:17:32.420 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 288459 00:17:32.420 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 288459 00:17:32.420 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:17:32.420 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:32.420 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:32.420 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:32.420 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:32.420 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:32.420 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:32.420 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:32.420 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:32.420 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.420 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.420 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.047 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:35.047 00:17:35.047 real 0m25.728s 00:17:35.047 user 0m30.853s 00:17:35.047 sys 0m6.973s 00:17:35.047 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:35.048 ************************************ 00:17:35.048 END TEST nvmf_ns_masking 00:17:35.048 ************************************ 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:35.048 
22:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:35.048 ************************************ 00:17:35.048 START TEST nvmf_nvme_cli 00:17:35.048 ************************************ 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:35.048 * Looking for test storage... 00:17:35.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:35.048 
22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:35.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.048 --rc genhtml_branch_coverage=1 00:17:35.048 --rc genhtml_function_coverage=1 00:17:35.048 --rc genhtml_legend=1 00:17:35.048 --rc geninfo_all_blocks=1 00:17:35.048 --rc geninfo_unexecuted_blocks=1 00:17:35.048 
00:17:35.048 ' 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:35.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.048 --rc genhtml_branch_coverage=1 00:17:35.048 --rc genhtml_function_coverage=1 00:17:35.048 --rc genhtml_legend=1 00:17:35.048 --rc geninfo_all_blocks=1 00:17:35.048 --rc geninfo_unexecuted_blocks=1 00:17:35.048 00:17:35.048 ' 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:35.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.048 --rc genhtml_branch_coverage=1 00:17:35.048 --rc genhtml_function_coverage=1 00:17:35.048 --rc genhtml_legend=1 00:17:35.048 --rc geninfo_all_blocks=1 00:17:35.048 --rc geninfo_unexecuted_blocks=1 00:17:35.048 00:17:35.048 ' 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:35.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.048 --rc genhtml_branch_coverage=1 00:17:35.048 --rc genhtml_function_coverage=1 00:17:35.048 --rc genhtml_legend=1 00:17:35.048 --rc geninfo_all_blocks=1 00:17:35.048 --rc geninfo_unexecuted_blocks=1 00:17:35.048 00:17:35.048 ' 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:35.048 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.049 22:26:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:35.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:35.049 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:41.662 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:41.662 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:41.662 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:41.662 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:41.662 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:41.662 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:41.662 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:41.662 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:41.662 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:41.662 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:41.662 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:41.662 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:41.662 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:41.662 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:41.662 22:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:41.662 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:41.663 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:41.663 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.663 22:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:41.663 Found net devices under 0000:af:00.0: cvl_0_0 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:41.663 Found net devices under 0000:af:00.1: cvl_0_1 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:41.663 22:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:41.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:17:41.663 00:17:41.663 --- 10.0.0.2 ping statistics --- 00:17:41.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.663 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:41.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:41.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:17:41.663 00:17:41.663 --- 10.0.0.1 ping statistics --- 00:17:41.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.663 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:41.663 22:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=295009 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 295009 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 295009 ']' 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.663 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:41.664 [2024-12-14 22:27:01.580435] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:17:41.664 [2024-12-14 22:27:01.580479] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.664 [2024-12-14 22:27:01.660312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:41.664 [2024-12-14 22:27:01.685721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.664 [2024-12-14 22:27:01.685755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.664 [2024-12-14 22:27:01.685763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.664 [2024-12-14 22:27:01.685769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.664 [2024-12-14 22:27:01.685775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:41.664 [2024-12-14 22:27:01.687152] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.664 [2024-12-14 22:27:01.687181] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.664 [2024-12-14 22:27:01.687207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.664 [2024-12-14 22:27:01.687209] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:41.664 [2024-12-14 22:27:01.815218] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:41.664 Malloc0 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:41.664 Malloc1 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:41.664 [2024-12-14 22:27:01.904072] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.664 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:17:41.664 00:17:41.664 Discovery Log Number of Records 2, Generation counter 2 00:17:41.664 =====Discovery Log Entry 0====== 00:17:41.664 trtype: tcp 00:17:41.664 adrfam: ipv4 00:17:41.664 subtype: current discovery subsystem 00:17:41.664 treq: not required 00:17:41.664 portid: 0 00:17:41.664 trsvcid: 4420 
00:17:41.664 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:41.664 traddr: 10.0.0.2 00:17:41.664 eflags: explicit discovery connections, duplicate discovery information 00:17:41.664 sectype: none 00:17:41.664 =====Discovery Log Entry 1====== 00:17:41.664 trtype: tcp 00:17:41.664 adrfam: ipv4 00:17:41.664 subtype: nvme subsystem 00:17:41.664 treq: not required 00:17:41.664 portid: 0 00:17:41.664 trsvcid: 4420 00:17:41.664 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:41.664 traddr: 10.0.0.2 00:17:41.664 eflags: none 00:17:41.664 sectype: none 00:17:41.664 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:41.664 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:41.664 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:41.664 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:41.664 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:41.664 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:41.664 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:41.664 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:41.664 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:41.664 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:41.664 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:42.613 22:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:42.613 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:17:42.613 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:42.613 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:42.613 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:42.613 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:44.520 
22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:44.520 /dev/nvme0n2 ]] 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:44.520 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:44.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:44.521 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:44.521 rmmod nvme_tcp 00:17:44.521 rmmod nvme_fabrics 00:17:44.521 rmmod nvme_keyring 00:17:44.781 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:44.781 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:44.781 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:44.781 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 295009 ']' 
00:17:44.781 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 295009 00:17:44.781 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 295009 ']' 00:17:44.781 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 295009 00:17:44.781 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:17:44.781 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.781 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 295009 00:17:44.781 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:44.781 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:44.781 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 295009' 00:17:44.781 killing process with pid 295009 00:17:44.781 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 295009 00:17:44.781 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 295009 00:17:45.042 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:45.042 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:45.042 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:45.042 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:45.042 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:17:45.042 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:17:45.042 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:17:45.042 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:45.042 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:45.042 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.042 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:45.042 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.967 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:46.967 00:17:46.967 real 0m12.325s 00:17:46.967 user 0m17.555s 00:17:46.967 sys 0m5.057s 00:17:46.967 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.967 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:46.967 ************************************ 00:17:46.967 END TEST nvmf_nvme_cli 00:17:46.967 ************************************ 00:17:46.967 22:27:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:46.967 22:27:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:46.967 22:27:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:46.967 22:27:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.967 22:27:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:46.967 ************************************ 00:17:46.968 START TEST 
nvmf_vfio_user 00:17:46.968 ************************************ 00:17:46.968 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:47.234 * Looking for test storage... 00:17:47.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:47.234 22:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:47.234 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:47.234 22:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:47.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.234 --rc genhtml_branch_coverage=1 00:17:47.234 --rc genhtml_function_coverage=1 00:17:47.234 --rc genhtml_legend=1 00:17:47.234 --rc geninfo_all_blocks=1 00:17:47.234 --rc geninfo_unexecuted_blocks=1 00:17:47.234 00:17:47.234 ' 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:47.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.234 --rc genhtml_branch_coverage=1 00:17:47.234 --rc genhtml_function_coverage=1 00:17:47.234 --rc genhtml_legend=1 00:17:47.234 --rc geninfo_all_blocks=1 00:17:47.234 --rc geninfo_unexecuted_blocks=1 00:17:47.234 00:17:47.234 ' 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:47.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.234 --rc genhtml_branch_coverage=1 00:17:47.234 --rc genhtml_function_coverage=1 00:17:47.234 --rc genhtml_legend=1 00:17:47.234 --rc geninfo_all_blocks=1 00:17:47.234 --rc geninfo_unexecuted_blocks=1 00:17:47.234 00:17:47.234 ' 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:47.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.234 --rc genhtml_branch_coverage=1 00:17:47.234 --rc genhtml_function_coverage=1 00:17:47.234 --rc genhtml_legend=1 00:17:47.234 --rc geninfo_all_blocks=1 00:17:47.234 --rc geninfo_unexecuted_blocks=1 00:17:47.234 00:17:47.234 ' 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:47.234 
22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.234 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:47.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:47.235 22:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=296684 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 296684' 00:17:47.235 Process pid: 296684 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 296684 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
296684 ']' 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.235 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:47.235 [2024-12-14 22:27:08.089531] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:47.235 [2024-12-14 22:27:08.089582] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.505 [2024-12-14 22:27:08.164468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:47.505 [2024-12-14 22:27:08.187384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.505 [2024-12-14 22:27:08.187422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.505 [2024-12-14 22:27:08.187430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.505 [2024-12-14 22:27:08.187437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.505 [2024-12-14 22:27:08.187443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:47.505 [2024-12-14 22:27:08.188892] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.505 [2024-12-14 22:27:08.189001] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.505 [2024-12-14 22:27:08.189002] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:47.505 [2024-12-14 22:27:08.188940] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.505 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.505 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:47.505 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:48.445 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:48.705 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:48.705 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:48.705 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:48.705 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:48.705 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:48.966 Malloc1 00:17:48.966 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:49.227 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:49.488 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:49.489 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:49.489 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:49.489 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:49.751 Malloc2 00:17:49.751 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:50.012 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:50.287 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:50.287 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:50.287 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:50.287 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:17:50.287 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:50.287 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:50.287 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:50.287 [2024-12-14 22:27:11.152195] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:50.287 [2024-12-14 22:27:11.152242] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid297174 ] 00:17:50.583 [2024-12-14 22:27:11.192222] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:50.583 [2024-12-14 22:27:11.197531] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:50.583 [2024-12-14 22:27:11.197549] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f711dfbc000 00:17:50.583 [2024-12-14 22:27:11.198529] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:50.583 [2024-12-14 22:27:11.199519] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:50.583 [2024-12-14 22:27:11.200523] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:50.583 [2024-12-14 22:27:11.201531] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:50.583 [2024-12-14 22:27:11.202543] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:50.583 [2024-12-14 22:27:11.203540] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:50.583 [2024-12-14 22:27:11.204543] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:50.583 [2024-12-14 22:27:11.205547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:50.584 [2024-12-14 22:27:11.206560] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:50.584 [2024-12-14 22:27:11.206569] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f711ccc5000 00:17:50.584 [2024-12-14 22:27:11.207472] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:50.584 [2024-12-14 22:27:11.216858] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:50.584 [2024-12-14 22:27:11.216885] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:17:50.584 [2024-12-14 22:27:11.221657] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:50.584 [2024-12-14 22:27:11.221690] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:50.584 [2024-12-14 22:27:11.221761] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:17:50.584 [2024-12-14 22:27:11.221775] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:17:50.584 [2024-12-14 22:27:11.221780] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:17:50.584 [2024-12-14 22:27:11.222659] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:50.584 [2024-12-14 22:27:11.222668] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:17:50.584 [2024-12-14 22:27:11.222674] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:17:50.584 [2024-12-14 22:27:11.223662] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:50.584 [2024-12-14 22:27:11.223669] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:17:50.584 [2024-12-14 22:27:11.223675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:50.584 [2024-12-14 22:27:11.224664] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:50.584 [2024-12-14 22:27:11.224671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:50.584 [2024-12-14 22:27:11.225671] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:50.584 [2024-12-14 22:27:11.225678] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:50.584 [2024-12-14 22:27:11.225685] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:50.584 [2024-12-14 22:27:11.225691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:50.584 [2024-12-14 22:27:11.225799] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:17:50.584 [2024-12-14 22:27:11.225804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:50.584 [2024-12-14 22:27:11.225808] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:50.584 [2024-12-14 22:27:11.226681] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:50.584 [2024-12-14 22:27:11.227681] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:50.584 [2024-12-14 22:27:11.228690] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:50.584 [2024-12-14 22:27:11.229692] 
vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:50.584 [2024-12-14 22:27:11.229767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:50.584 [2024-12-14 22:27:11.230709] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:50.584 [2024-12-14 22:27:11.230716] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:50.584 [2024-12-14 22:27:11.230720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:50.584 [2024-12-14 22:27:11.230736] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:17:50.584 [2024-12-14 22:27:11.230747] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:50.584 [2024-12-14 22:27:11.230757] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:50.584 [2024-12-14 22:27:11.230762] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:50.584 [2024-12-14 22:27:11.230765] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:50.584 [2024-12-14 22:27:11.230776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:50.584 [2024-12-14 22:27:11.230821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:17:50.584 [2024-12-14 22:27:11.230829] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:17:50.584 [2024-12-14 22:27:11.230833] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:17:50.584 [2024-12-14 22:27:11.230837] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:17:50.584 [2024-12-14 22:27:11.230841] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:50.584 [2024-12-14 22:27:11.230845] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:17:50.584 [2024-12-14 22:27:11.230849] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:17:50.584 [2024-12-14 22:27:11.230855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:17:50.584 [2024-12-14 22:27:11.230864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:50.584 [2024-12-14 22:27:11.230874] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:50.584 [2024-12-14 22:27:11.230887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:50.584 [2024-12-14 22:27:11.230897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.584 [2024-12-14 22:27:11.230907] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.584 [2024-12-14 22:27:11.230914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.584 [2024-12-14 22:27:11.230922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.584 [2024-12-14 22:27:11.230926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:50.584 [2024-12-14 22:27:11.230933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:50.584 [2024-12-14 22:27:11.230941] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:50.584 [2024-12-14 22:27:11.230952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:50.584 [2024-12-14 22:27:11.230958] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:17:50.584 [2024-12-14 22:27:11.230962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:50.584 [2024-12-14 22:27:11.230968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:17:50.584 [2024-12-14 22:27:11.230973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:17:50.584 [2024-12-14 22:27:11.230980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:50.584 [2024-12-14 22:27:11.230990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:50.584 [2024-12-14 22:27:11.231037] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:17:50.584 [2024-12-14 22:27:11.231045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:50.584 [2024-12-14 22:27:11.231052] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:50.584 [2024-12-14 22:27:11.231056] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:50.584 [2024-12-14 22:27:11.231059] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:50.584 [2024-12-14 22:27:11.231064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:50.584 [2024-12-14 22:27:11.231075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:50.584 [2024-12-14 22:27:11.231083] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:17:50.584 [2024-12-14 22:27:11.231090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:17:50.584 [2024-12-14 22:27:11.231097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:17:50.584 [2024-12-14 22:27:11.231103] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:50.584 [2024-12-14 22:27:11.231107] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:50.584 [2024-12-14 22:27:11.231110] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:50.584 [2024-12-14 22:27:11.231115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:50.584 [2024-12-14 22:27:11.231139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:50.585 [2024-12-14 22:27:11.231149] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:50.585 [2024-12-14 22:27:11.231156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:50.585 [2024-12-14 22:27:11.231162] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:50.585 [2024-12-14 22:27:11.231165] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:50.585 [2024-12-14 22:27:11.231168] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:50.585 [2024-12-14 22:27:11.231174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:50.585 [2024-12-14 22:27:11.231188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:17:50.585 [2024-12-14 22:27:11.231195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:50.585 [2024-12-14 22:27:11.231201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:50.585 [2024-12-14 22:27:11.231208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:17:50.585 [2024-12-14 22:27:11.231213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:50.585 [2024-12-14 22:27:11.231217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:50.585 [2024-12-14 22:27:11.231222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:17:50.585 [2024-12-14 22:27:11.231226] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:50.585 [2024-12-14 22:27:11.231230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:17:50.585 [2024-12-14 22:27:11.231235] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:17:50.585 [2024-12-14 22:27:11.231249] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:50.585 [2024-12-14 22:27:11.231259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:50.585 [2024-12-14 22:27:11.231270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:50.585 [2024-12-14 22:27:11.231277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:50.585 [2024-12-14 22:27:11.231287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:50.585 [2024-12-14 22:27:11.231299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:50.585 [2024-12-14 22:27:11.231308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:50.585 [2024-12-14 22:27:11.231316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:50.585 [2024-12-14 22:27:11.231327] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:50.585 [2024-12-14 22:27:11.231331] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:50.585 [2024-12-14 22:27:11.231334] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:50.585 [2024-12-14 22:27:11.231337] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:50.585 [2024-12-14 22:27:11.231340] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:50.585 [2024-12-14 22:27:11.231345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:50.585 [2024-12-14 22:27:11.231351] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:50.585 [2024-12-14 22:27:11.231355] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:50.585 [2024-12-14 22:27:11.231358] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:50.585 [2024-12-14 22:27:11.231364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:50.585 [2024-12-14 22:27:11.231369] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:50.585 [2024-12-14 22:27:11.231373] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:50.585 [2024-12-14 22:27:11.231376] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:50.585 [2024-12-14 22:27:11.231381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:50.585 [2024-12-14 22:27:11.231388] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:50.585 [2024-12-14 22:27:11.231392] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:50.585 [2024-12-14 22:27:11.231394] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:50.585 [2024-12-14 22:27:11.231399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:50.585 [2024-12-14 22:27:11.231405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:50.585 [2024-12-14 
22:27:11.231417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:50.585 [2024-12-14 22:27:11.231426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:50.585 [2024-12-14 22:27:11.231434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:50.585 ===================================================== 00:17:50.585 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:50.585 ===================================================== 00:17:50.585 Controller Capabilities/Features 00:17:50.585 ================================ 00:17:50.585 Vendor ID: 4e58 00:17:50.585 Subsystem Vendor ID: 4e58 00:17:50.585 Serial Number: SPDK1 00:17:50.585 Model Number: SPDK bdev Controller 00:17:50.585 Firmware Version: 25.01 00:17:50.585 Recommended Arb Burst: 6 00:17:50.585 IEEE OUI Identifier: 8d 6b 50 00:17:50.585 Multi-path I/O 00:17:50.585 May have multiple subsystem ports: Yes 00:17:50.585 May have multiple controllers: Yes 00:17:50.585 Associated with SR-IOV VF: No 00:17:50.585 Max Data Transfer Size: 131072 00:17:50.585 Max Number of Namespaces: 32 00:17:50.585 Max Number of I/O Queues: 127 00:17:50.585 NVMe Specification Version (VS): 1.3 00:17:50.585 NVMe Specification Version (Identify): 1.3 00:17:50.585 Maximum Queue Entries: 256 00:17:50.585 Contiguous Queues Required: Yes 00:17:50.585 Arbitration Mechanisms Supported 00:17:50.585 Weighted Round Robin: Not Supported 00:17:50.585 Vendor Specific: Not Supported 00:17:50.585 Reset Timeout: 15000 ms 00:17:50.585 Doorbell Stride: 4 bytes 00:17:50.585 NVM Subsystem Reset: Not Supported 00:17:50.585 Command Sets Supported 00:17:50.585 NVM Command Set: Supported 00:17:50.585 Boot Partition: Not Supported 00:17:50.585 Memory Page Size Minimum: 4096 bytes 00:17:50.585 
Memory Page Size Maximum: 4096 bytes 00:17:50.585 Persistent Memory Region: Not Supported 00:17:50.585 Optional Asynchronous Events Supported 00:17:50.585 Namespace Attribute Notices: Supported 00:17:50.585 Firmware Activation Notices: Not Supported 00:17:50.585 ANA Change Notices: Not Supported 00:17:50.585 PLE Aggregate Log Change Notices: Not Supported 00:17:50.585 LBA Status Info Alert Notices: Not Supported 00:17:50.585 EGE Aggregate Log Change Notices: Not Supported 00:17:50.585 Normal NVM Subsystem Shutdown event: Not Supported 00:17:50.585 Zone Descriptor Change Notices: Not Supported 00:17:50.585 Discovery Log Change Notices: Not Supported 00:17:50.585 Controller Attributes 00:17:50.585 128-bit Host Identifier: Supported 00:17:50.585 Non-Operational Permissive Mode: Not Supported 00:17:50.585 NVM Sets: Not Supported 00:17:50.585 Read Recovery Levels: Not Supported 00:17:50.585 Endurance Groups: Not Supported 00:17:50.585 Predictable Latency Mode: Not Supported 00:17:50.585 Traffic Based Keep ALive: Not Supported 00:17:50.585 Namespace Granularity: Not Supported 00:17:50.585 SQ Associations: Not Supported 00:17:50.585 UUID List: Not Supported 00:17:50.585 Multi-Domain Subsystem: Not Supported 00:17:50.585 Fixed Capacity Management: Not Supported 00:17:50.585 Variable Capacity Management: Not Supported 00:17:50.585 Delete Endurance Group: Not Supported 00:17:50.585 Delete NVM Set: Not Supported 00:17:50.585 Extended LBA Formats Supported: Not Supported 00:17:50.585 Flexible Data Placement Supported: Not Supported 00:17:50.585 00:17:50.585 Controller Memory Buffer Support 00:17:50.585 ================================ 00:17:50.585 Supported: No 00:17:50.585 00:17:50.585 Persistent Memory Region Support 00:17:50.585 ================================ 00:17:50.585 Supported: No 00:17:50.585 00:17:50.585 Admin Command Set Attributes 00:17:50.585 ============================ 00:17:50.585 Security Send/Receive: Not Supported 00:17:50.586 Format NVM: Not Supported 
00:17:50.586 Firmware Activate/Download: Not Supported 00:17:50.586 Namespace Management: Not Supported 00:17:50.586 Device Self-Test: Not Supported 00:17:50.586 Directives: Not Supported 00:17:50.586 NVMe-MI: Not Supported 00:17:50.586 Virtualization Management: Not Supported 00:17:50.586 Doorbell Buffer Config: Not Supported 00:17:50.586 Get LBA Status Capability: Not Supported 00:17:50.586 Command & Feature Lockdown Capability: Not Supported 00:17:50.586 Abort Command Limit: 4 00:17:50.586 Async Event Request Limit: 4 00:17:50.586 Number of Firmware Slots: N/A 00:17:50.586 Firmware Slot 1 Read-Only: N/A 00:17:50.586 Firmware Activation Without Reset: N/A 00:17:50.586 Multiple Update Detection Support: N/A 00:17:50.586 Firmware Update Granularity: No Information Provided 00:17:50.586 Per-Namespace SMART Log: No 00:17:50.586 Asymmetric Namespace Access Log Page: Not Supported 00:17:50.586 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:50.586 Command Effects Log Page: Supported 00:17:50.586 Get Log Page Extended Data: Supported 00:17:50.586 Telemetry Log Pages: Not Supported 00:17:50.586 Persistent Event Log Pages: Not Supported 00:17:50.586 Supported Log Pages Log Page: May Support 00:17:50.586 Commands Supported & Effects Log Page: Not Supported 00:17:50.586 Feature Identifiers & Effects Log Page:May Support 00:17:50.586 NVMe-MI Commands & Effects Log Page: May Support 00:17:50.586 Data Area 4 for Telemetry Log: Not Supported 00:17:50.586 Error Log Page Entries Supported: 128 00:17:50.586 Keep Alive: Supported 00:17:50.586 Keep Alive Granularity: 10000 ms 00:17:50.586 00:17:50.586 NVM Command Set Attributes 00:17:50.586 ========================== 00:17:50.586 Submission Queue Entry Size 00:17:50.586 Max: 64 00:17:50.586 Min: 64 00:17:50.586 Completion Queue Entry Size 00:17:50.586 Max: 16 00:17:50.586 Min: 16 00:17:50.586 Number of Namespaces: 32 00:17:50.586 Compare Command: Supported 00:17:50.586 Write Uncorrectable Command: Not Supported 00:17:50.586 Dataset 
Management Command: Supported 00:17:50.586 Write Zeroes Command: Supported 00:17:50.586 Set Features Save Field: Not Supported 00:17:50.586 Reservations: Not Supported 00:17:50.586 Timestamp: Not Supported 00:17:50.586 Copy: Supported 00:17:50.586 Volatile Write Cache: Present 00:17:50.586 Atomic Write Unit (Normal): 1 00:17:50.586 Atomic Write Unit (PFail): 1 00:17:50.586 Atomic Compare & Write Unit: 1 00:17:50.586 Fused Compare & Write: Supported 00:17:50.586 Scatter-Gather List 00:17:50.586 SGL Command Set: Supported (Dword aligned) 00:17:50.586 SGL Keyed: Not Supported 00:17:50.586 SGL Bit Bucket Descriptor: Not Supported 00:17:50.586 SGL Metadata Pointer: Not Supported 00:17:50.586 Oversized SGL: Not Supported 00:17:50.586 SGL Metadata Address: Not Supported 00:17:50.586 SGL Offset: Not Supported 00:17:50.586 Transport SGL Data Block: Not Supported 00:17:50.586 Replay Protected Memory Block: Not Supported 00:17:50.586 00:17:50.586 Firmware Slot Information 00:17:50.586 ========================= 00:17:50.586 Active slot: 1 00:17:50.586 Slot 1 Firmware Revision: 25.01 00:17:50.586 00:17:50.586 00:17:50.586 Commands Supported and Effects 00:17:50.586 ============================== 00:17:50.586 Admin Commands 00:17:50.586 -------------- 00:17:50.586 Get Log Page (02h): Supported 00:17:50.586 Identify (06h): Supported 00:17:50.586 Abort (08h): Supported 00:17:50.586 Set Features (09h): Supported 00:17:50.586 Get Features (0Ah): Supported 00:17:50.586 Asynchronous Event Request (0Ch): Supported 00:17:50.586 Keep Alive (18h): Supported 00:17:50.586 I/O Commands 00:17:50.586 ------------ 00:17:50.586 Flush (00h): Supported LBA-Change 00:17:50.586 Write (01h): Supported LBA-Change 00:17:50.586 Read (02h): Supported 00:17:50.586 Compare (05h): Supported 00:17:50.586 Write Zeroes (08h): Supported LBA-Change 00:17:50.586 Dataset Management (09h): Supported LBA-Change 00:17:50.586 Copy (19h): Supported LBA-Change 00:17:50.586 00:17:50.586 Error Log 00:17:50.586 ========= 
00:17:50.586 00:17:50.586 Arbitration 00:17:50.586 =========== 00:17:50.586 Arbitration Burst: 1 00:17:50.586 00:17:50.586 Power Management 00:17:50.586 ================ 00:17:50.586 Number of Power States: 1 00:17:50.586 Current Power State: Power State #0 00:17:50.586 Power State #0: 00:17:50.586 Max Power: 0.00 W 00:17:50.586 Non-Operational State: Operational 00:17:50.586 Entry Latency: Not Reported 00:17:50.586 Exit Latency: Not Reported 00:17:50.586 Relative Read Throughput: 0 00:17:50.586 Relative Read Latency: 0 00:17:50.586 Relative Write Throughput: 0 00:17:50.586 Relative Write Latency: 0 00:17:50.586 Idle Power: Not Reported 00:17:50.586 Active Power: Not Reported 00:17:50.586 Non-Operational Permissive Mode: Not Supported 00:17:50.586 00:17:50.586 Health Information 00:17:50.586 ================== 00:17:50.586 Critical Warnings: 00:17:50.586 Available Spare Space: OK 00:17:50.586 Temperature: OK 00:17:50.586 Device Reliability: OK 00:17:50.586 Read Only: No 00:17:50.586 Volatile Memory Backup: OK 00:17:50.586 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:50.586 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:50.586 Available Spare: 0% 00:17:50.586 Available Sp[2024-12-14 22:27:11.231515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:50.586 [2024-12-14 22:27:11.231524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:50.586 [2024-12-14 22:27:11.231549] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:17:50.586 [2024-12-14 22:27:11.231557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.586 [2024-12-14 22:27:11.231562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.586 [2024-12-14 22:27:11.231568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.586 [2024-12-14 22:27:11.231573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.586 [2024-12-14 22:27:11.234909] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:50.586 [2024-12-14 22:27:11.234920] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:50.586 [2024-12-14 22:27:11.235730] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:50.586 [2024-12-14 22:27:11.235775] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:17:50.586 [2024-12-14 22:27:11.235781] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:17:50.586 [2024-12-14 22:27:11.236737] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:50.586 [2024-12-14 22:27:11.236747] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:17:50.586 [2024-12-14 22:27:11.236805] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:50.586 [2024-12-14 22:27:11.237764] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:50.586 are Threshold: 0% 00:17:50.586 Life Percentage Used: 0% 00:17:50.586 Data Units Read: 0 00:17:50.586 Data 
Units Written: 0 00:17:50.586 Host Read Commands: 0 00:17:50.586 Host Write Commands: 0 00:17:50.586 Controller Busy Time: 0 minutes 00:17:50.586 Power Cycles: 0 00:17:50.586 Power On Hours: 0 hours 00:17:50.586 Unsafe Shutdowns: 0 00:17:50.586 Unrecoverable Media Errors: 0 00:17:50.586 Lifetime Error Log Entries: 0 00:17:50.586 Warning Temperature Time: 0 minutes 00:17:50.586 Critical Temperature Time: 0 minutes 00:17:50.586 00:17:50.586 Number of Queues 00:17:50.586 ================ 00:17:50.586 Number of I/O Submission Queues: 127 00:17:50.586 Number of I/O Completion Queues: 127 00:17:50.586 00:17:50.586 Active Namespaces 00:17:50.586 ================= 00:17:50.586 Namespace ID:1 00:17:50.586 Error Recovery Timeout: Unlimited 00:17:50.586 Command Set Identifier: NVM (00h) 00:17:50.586 Deallocate: Supported 00:17:50.586 Deallocated/Unwritten Error: Not Supported 00:17:50.586 Deallocated Read Value: Unknown 00:17:50.586 Deallocate in Write Zeroes: Not Supported 00:17:50.586 Deallocated Guard Field: 0xFFFF 00:17:50.586 Flush: Supported 00:17:50.586 Reservation: Supported 00:17:50.586 Namespace Sharing Capabilities: Multiple Controllers 00:17:50.586 Size (in LBAs): 131072 (0GiB) 00:17:50.586 Capacity (in LBAs): 131072 (0GiB) 00:17:50.586 Utilization (in LBAs): 131072 (0GiB) 00:17:50.586 NGUID: BC176D04F1D44ADFB3229A5993C39021 00:17:50.586 UUID: bc176d04-f1d4-4adf-b322-9a5993c39021 00:17:50.586 Thin Provisioning: Not Supported 00:17:50.586 Per-NS Atomic Units: Yes 00:17:50.586 Atomic Boundary Size (Normal): 0 00:17:50.586 Atomic Boundary Size (PFail): 0 00:17:50.586 Atomic Boundary Offset: 0 00:17:50.587 Maximum Single Source Range Length: 65535 00:17:50.587 Maximum Copy Length: 65535 00:17:50.587 Maximum Source Range Count: 1 00:17:50.587 NGUID/EUI64 Never Reused: No 00:17:50.587 Namespace Write Protected: No 00:17:50.587 Number of LBA Formats: 1 00:17:50.587 Current LBA Format: LBA Format #00 00:17:50.587 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:17:50.587 00:17:50.587 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:50.587 [2024-12-14 22:27:11.462753] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:55.948 Initializing NVMe Controllers 00:17:55.948 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:55.948 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:55.948 Initialization complete. Launching workers. 00:17:55.948 ======================================================== 00:17:55.948 Latency(us) 00:17:55.948 Device Information : IOPS MiB/s Average min max 00:17:55.948 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39887.07 155.81 3208.90 958.60 6656.16 00:17:55.948 ======================================================== 00:17:55.948 Total : 39887.07 155.81 3208.90 958.60 6656.16 00:17:55.948 00:17:55.948 [2024-12-14 22:27:16.481941] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:55.948 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:55.948 [2024-12-14 22:27:16.720074] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:01.316 Initializing NVMe Controllers 00:18:01.316 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:18:01.316 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:18:01.316 Initialization complete. Launching workers.
00:18:01.316 ========================================================
00:18:01.316 Latency(us)
00:18:01.316 Device Information : IOPS MiB/s Average min max
00:18:01.316 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16038.31 62.65 7986.26 3990.97 11970.54
00:18:01.316 ========================================================
00:18:01.316 Total : 16038.31 62.65 7986.26 3990.97 11970.54
00:18:01.316
00:18:01.316 [2024-12-14 22:27:21.766024] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:18:01.316 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:18:01.316 [2024-12-14 22:27:21.975977] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:18:06.683 [2024-12-14 22:27:27.059201] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:18:06.683 Initializing NVMe Controllers
00:18:06.683 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:18:06.683 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:18:06.683 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1
00:18:06.683 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2
00:18:06.683 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3
00:18:06.683 Initialization complete. Launching workers.
00:18:06.683 Starting thread on core 2
00:18:06.683 Starting thread on core 3
00:18:06.683 Starting thread on core 1
00:18:06.683 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g
00:18:06.683 [2024-12-14 22:27:27.349910] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:18:10.077 [2024-12-14 22:27:30.507128] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:18:10.077 Initializing NVMe Controllers
00:18:10.077 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:18:10.077 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:18:10.077 Associating SPDK bdev Controller (SPDK1 ) with lcore 0
00:18:10.077 Associating SPDK bdev Controller (SPDK1 ) with lcore 1
00:18:10.077 Associating SPDK bdev Controller (SPDK1 ) with lcore 2
00:18:10.077 Associating SPDK bdev Controller (SPDK1 ) with lcore 3
00:18:10.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:18:10.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:18:10.077 Initialization complete. Launching workers.
00:18:10.077 Starting thread on core 1 with urgent priority queue
00:18:10.077 Starting thread on core 2 with urgent priority queue
00:18:10.077 Starting thread on core 3 with urgent priority queue
00:18:10.077 Starting thread on core 0 with urgent priority queue
00:18:10.077 SPDK bdev Controller (SPDK1 ) core 0: 7170.33 IO/s 13.95 secs/100000 ios
00:18:10.077 SPDK bdev Controller (SPDK1 ) core 1: 7994.00 IO/s 12.51 secs/100000 ios
00:18:10.077 SPDK bdev Controller (SPDK1 ) core 2: 9490.00 IO/s 10.54 secs/100000 ios
00:18:10.077 SPDK bdev Controller (SPDK1 ) core 3: 7988.67 IO/s 12.52 secs/100000 ios
00:18:10.077 ========================================================
00:18:10.077
00:18:10.077 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:18:10.077 [2024-12-14 22:27:30.791350] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:18:10.077 Initializing NVMe Controllers
00:18:10.077 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:18:10.077 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:18:10.077 Namespace ID: 1 size: 0GB
00:18:10.077 Initialization complete.
00:18:10.077 INFO: using host memory buffer for IO
00:18:10.077 Hello world!
00:18:10.077 [2024-12-14 22:27:30.825575] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:10.077 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:10.362 [2024-12-14 22:27:31.096687] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:11.328 Initializing NVMe Controllers 00:18:11.328 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:11.328 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:11.328 Initialization complete. Launching workers. 00:18:11.328 submit (in ns) avg, min, max = 7482.3, 3142.9, 4164340.0 00:18:11.328 complete (in ns) avg, min, max = 21040.9, 1722.9, 4005425.7 00:18:11.328 00:18:11.328 Submit histogram 00:18:11.328 ================ 00:18:11.328 Range in us Cumulative Count 00:18:11.328 3.139 - 3.154: 0.0061% ( 1) 00:18:11.328 3.154 - 3.170: 0.0244% ( 3) 00:18:11.328 3.170 - 3.185: 0.0305% ( 1) 00:18:11.328 3.185 - 3.200: 0.0548% ( 4) 00:18:11.328 3.200 - 3.215: 0.5787% ( 86) 00:18:11.328 3.215 - 3.230: 3.0155% ( 400) 00:18:11.328 3.230 - 3.246: 8.0963% ( 834) 00:18:11.328 3.246 - 3.261: 13.7313% ( 925) 00:18:11.328 3.261 - 3.276: 21.1331% ( 1215) 00:18:11.328 3.276 - 3.291: 28.8943% ( 1274) 00:18:11.328 3.291 - 3.307: 34.6695% ( 948) 00:18:11.328 3.307 - 3.322: 39.8294% ( 847) 00:18:11.328 3.322 - 3.337: 44.9528% ( 841) 00:18:11.328 3.337 - 3.352: 49.7167% ( 782) 00:18:11.328 3.352 - 3.368: 53.6217% ( 641) 00:18:11.328 3.368 - 3.383: 59.4152% ( 951) 00:18:11.328 3.383 - 3.398: 65.6412% ( 1022) 00:18:11.328 3.398 - 3.413: 71.0570% ( 889) 00:18:11.328 3.413 - 3.429: 76.7164% ( 929) 00:18:11.328 3.429 - 3.444: 81.3707% ( 764) 00:18:11.328 3.444 - 3.459: 84.3984% ( 497) 
00:18:11.328 3.459 - 3.474: 86.1956% ( 295) 00:18:11.328 3.474 - 3.490: 87.1581% ( 158) 00:18:11.328 3.490 - 3.505: 87.7977% ( 105) 00:18:11.328 3.505 - 3.520: 88.3582% ( 92) 00:18:11.328 3.520 - 3.535: 89.1563% ( 131) 00:18:11.328 3.535 - 3.550: 89.9421% ( 129) 00:18:11.328 3.550 - 3.566: 90.9168% ( 160) 00:18:11.328 3.566 - 3.581: 91.8124% ( 147) 00:18:11.328 3.581 - 3.596: 92.6713% ( 141) 00:18:11.328 3.596 - 3.611: 93.4024% ( 120) 00:18:11.328 3.611 - 3.627: 94.1334% ( 120) 00:18:11.328 3.627 - 3.642: 95.1081% ( 160) 00:18:11.328 3.642 - 3.657: 95.9732% ( 142) 00:18:11.328 3.657 - 3.672: 96.8078% ( 137) 00:18:11.328 3.672 - 3.688: 97.5206% ( 117) 00:18:11.328 3.688 - 3.703: 98.0688% ( 90) 00:18:11.328 3.703 - 3.718: 98.4709% ( 66) 00:18:11.328 3.718 - 3.733: 98.7633% ( 48) 00:18:11.328 3.733 - 3.749: 99.0740% ( 51) 00:18:11.328 3.749 - 3.764: 99.2568% ( 30) 00:18:11.328 3.764 - 3.779: 99.4274% ( 28) 00:18:11.328 3.779 - 3.794: 99.5126% ( 14) 00:18:11.328 3.794 - 3.810: 99.5614% ( 8) 00:18:11.328 3.810 - 3.825: 99.6101% ( 8) 00:18:11.328 3.825 - 3.840: 99.6345% ( 4) 00:18:11.328 3.840 - 3.855: 99.6467% ( 2) 00:18:11.328 3.855 - 3.870: 99.6528% ( 1) 00:18:11.328 3.870 - 3.886: 99.6588% ( 1) 00:18:11.328 3.962 - 3.992: 99.6649% ( 1) 00:18:11.329 4.815 - 4.846: 99.6710% ( 1) 00:18:11.329 5.150 - 5.181: 99.6771% ( 1) 00:18:11.329 5.242 - 5.272: 99.6832% ( 1) 00:18:11.329 5.303 - 5.333: 99.6893% ( 1) 00:18:11.329 5.394 - 5.425: 99.6954% ( 1) 00:18:11.329 5.425 - 5.455: 99.7015% ( 1) 00:18:11.329 5.455 - 5.486: 99.7076% ( 1) 00:18:11.329 5.577 - 5.608: 99.7137% ( 1) 00:18:11.329 5.608 - 5.638: 99.7198% ( 1) 00:18:11.329 5.699 - 5.730: 99.7259% ( 1) 00:18:11.329 5.730 - 5.760: 99.7320% ( 1) 00:18:11.329 5.882 - 5.912: 99.7380% ( 1) 00:18:11.329 5.973 - 6.004: 99.7563% ( 3) 00:18:11.329 6.004 - 6.034: 99.7624% ( 1) 00:18:11.329 6.187 - 6.217: 99.7685% ( 1) 00:18:11.329 6.217 - 6.248: 99.7746% ( 1) 00:18:11.329 6.491 - 6.522: 99.7868% ( 2) 00:18:11.329 6.583 - 6.613: 
99.7990% ( 2) 00:18:11.329 6.644 - 6.674: 99.8111% ( 2) 00:18:11.329 6.766 - 6.796: 99.8172% ( 1) 00:18:11.329 6.796 - 6.827: 99.8233% ( 1) 00:18:11.329 7.131 - 7.162: 99.8355% ( 2) 00:18:11.329 7.192 - 7.223: 99.8416% ( 1) 00:18:11.329 7.375 - 7.406: 99.8477% ( 1) 00:18:11.329 7.497 - 7.528: 99.8538% ( 1) 00:18:11.329 7.924 - 7.985: 99.8599% ( 1) 00:18:11.329 8.046 - 8.107: 99.8660% ( 1) 00:18:11.329 8.168 - 8.229: 99.8843% ( 3) 00:18:11.329 8.655 - 8.716: 99.8903% ( 1) 00:18:11.329 [2024-12-14 22:27:32.118717] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:11.329 8.838 - 8.899: 99.8964% ( 1) 00:18:11.329 3261.196 - 3276.800: 99.9025% ( 1) 00:18:11.329 3994.575 - 4025.783: 99.9939% ( 15) 00:18:11.329 4150.613 - 4181.821: 100.0000% ( 1) 00:18:11.329 00:18:11.329 Complete histogram 00:18:11.329 ================== 00:18:11.329 Range in us Cumulative Count 00:18:11.329 1.722 - 1.730: 0.0305% ( 5) 00:18:11.329 1.730 - 1.737: 0.0487% ( 3) 00:18:11.329 1.737 - 1.745: 0.0853% ( 6) 00:18:11.329 1.745 - 1.752: 0.0914% ( 1) 00:18:11.329 1.752 - 1.760: 0.0975% ( 1) 00:18:11.329 1.760 - 1.768: 0.2559% ( 26) 00:18:11.329 1.768 - 1.775: 2.9912% ( 449) 00:18:11.329 1.775 - 1.783: 15.1020% ( 1988) 00:18:11.329 1.783 - 1.790: 36.3814% ( 3493) 00:18:11.329 1.790 - 1.798: 50.9168% ( 2386) 00:18:11.329 1.798 - 1.806: 56.7225% ( 953) 00:18:11.329 1.806 - 1.813: 59.2933% ( 422) 00:18:11.329 1.813 - 1.821: 62.1748% ( 473) 00:18:11.329 1.821 - 1.829: 68.5410% ( 1045) 00:18:11.329 1.829 - 1.836: 79.8964% ( 1864) 00:18:11.329 1.836 - 1.844: 88.9430% ( 1485) 00:18:11.329 1.844 - 1.851: 93.3110% ( 717) 00:18:11.329 1.851 - 1.859: 95.3031% ( 327) 00:18:11.329 1.859 - 1.867: 96.6311% ( 218) 00:18:11.329 1.867 - 1.874: 97.5327% ( 148) 00:18:11.329 1.874 - 1.882: 97.8983% ( 60) 00:18:11.329 1.882 - 1.890: 98.0810% ( 30) 00:18:11.329 1.890 - 1.897: 98.2882% ( 34) 00:18:11.329 1.897 - 1.905: 98.5318% ( 40) 00:18:11.329 1.905 - 1.912: 
98.7877% ( 42) 00:18:11.329 1.912 - 1.920: 99.0375% ( 41) 00:18:11.329 1.920 - 1.928: 99.1471% ( 18) 00:18:11.329 1.928 - 1.935: 99.2446% ( 16) 00:18:11.329 1.935 - 1.943: 99.2811% ( 6) 00:18:11.329 1.943 - 1.950: 99.3055% ( 4) 00:18:11.329 1.950 - 1.966: 99.3238% ( 3) 00:18:11.329 1.966 - 1.981: 99.3299% ( 1) 00:18:11.329 2.011 - 2.027: 99.3360% ( 1) 00:18:11.329 2.027 - 2.042: 99.3482% ( 2) 00:18:11.329 2.149 - 2.164: 99.3542% ( 1) 00:18:11.329 2.240 - 2.255: 99.3603% ( 1) 00:18:11.329 2.286 - 2.301: 99.3664% ( 1) 00:18:11.329 2.301 - 2.316: 99.3725% ( 1) 00:18:11.329 3.550 - 3.566: 99.3786% ( 1) 00:18:11.329 3.733 - 3.749: 99.3847% ( 1) 00:18:11.329 3.794 - 3.810: 99.3908% ( 1) 00:18:11.329 3.825 - 3.840: 99.3969% ( 1) 00:18:11.329 3.901 - 3.931: 99.4152% ( 3) 00:18:11.329 3.931 - 3.962: 99.4213% ( 1) 00:18:11.329 4.236 - 4.267: 99.4274% ( 1) 00:18:11.329 4.602 - 4.632: 99.4334% ( 1) 00:18:11.329 4.663 - 4.693: 99.4395% ( 1) 00:18:11.329 4.968 - 4.998: 99.4456% ( 1) 00:18:11.329 4.998 - 5.029: 99.4517% ( 1) 00:18:11.329 5.090 - 5.120: 99.4578% ( 1) 00:18:11.329 5.120 - 5.150: 99.4639% ( 1) 00:18:11.329 5.242 - 5.272: 99.4700% ( 1) 00:18:11.329 5.364 - 5.394: 99.4761% ( 1) 00:18:11.329 5.394 - 5.425: 99.4822% ( 1) 00:18:11.329 5.973 - 6.004: 99.4883% ( 1) 00:18:11.329 6.827 - 6.857: 99.4944% ( 1) 00:18:11.329 7.070 - 7.101: 99.5005% ( 1) 00:18:11.329 7.162 - 7.192: 99.5065% ( 1) 00:18:11.329 8.411 - 8.472: 99.5126% ( 1) 00:18:11.329 38.766 - 39.010: 99.5187% ( 1) 00:18:11.329 3994.575 - 4025.783: 100.0000% ( 79) 00:18:11.329 00:18:11.329 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:11.329 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:11.329 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local 
subnqn=nqn.2019-07.io.spdk:cnode1 00:18:11.329 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:11.329 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:11.607 [ 00:18:11.607 { 00:18:11.607 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:11.607 "subtype": "Discovery", 00:18:11.607 "listen_addresses": [], 00:18:11.607 "allow_any_host": true, 00:18:11.607 "hosts": [] 00:18:11.607 }, 00:18:11.607 { 00:18:11.607 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:11.607 "subtype": "NVMe", 00:18:11.607 "listen_addresses": [ 00:18:11.607 { 00:18:11.607 "trtype": "VFIOUSER", 00:18:11.607 "adrfam": "IPv4", 00:18:11.607 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:11.607 "trsvcid": "0" 00:18:11.607 } 00:18:11.607 ], 00:18:11.607 "allow_any_host": true, 00:18:11.607 "hosts": [], 00:18:11.607 "serial_number": "SPDK1", 00:18:11.607 "model_number": "SPDK bdev Controller", 00:18:11.607 "max_namespaces": 32, 00:18:11.607 "min_cntlid": 1, 00:18:11.607 "max_cntlid": 65519, 00:18:11.607 "namespaces": [ 00:18:11.607 { 00:18:11.608 "nsid": 1, 00:18:11.608 "bdev_name": "Malloc1", 00:18:11.608 "name": "Malloc1", 00:18:11.608 "nguid": "BC176D04F1D44ADFB3229A5993C39021", 00:18:11.608 "uuid": "bc176d04-f1d4-4adf-b322-9a5993c39021" 00:18:11.608 } 00:18:11.608 ] 00:18:11.608 }, 00:18:11.608 { 00:18:11.608 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:11.608 "subtype": "NVMe", 00:18:11.608 "listen_addresses": [ 00:18:11.608 { 00:18:11.608 "trtype": "VFIOUSER", 00:18:11.608 "adrfam": "IPv4", 00:18:11.608 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:11.608 "trsvcid": "0" 00:18:11.608 } 00:18:11.608 ], 00:18:11.608 "allow_any_host": true, 00:18:11.608 "hosts": [], 00:18:11.608 "serial_number": "SPDK2", 00:18:11.608 "model_number": "SPDK bdev Controller", 00:18:11.608 
"max_namespaces": 32, 00:18:11.608 "min_cntlid": 1, 00:18:11.608 "max_cntlid": 65519, 00:18:11.608 "namespaces": [ 00:18:11.608 { 00:18:11.608 "nsid": 1, 00:18:11.608 "bdev_name": "Malloc2", 00:18:11.608 "name": "Malloc2", 00:18:11.608 "nguid": "D0CA3D0E4D6F4615BA64715D5A6EACA8", 00:18:11.608 "uuid": "d0ca3d0e-4d6f-4615-ba64-715d5a6eaca8" 00:18:11.608 } 00:18:11.608 ] 00:18:11.608 } 00:18:11.608 ] 00:18:11.608 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:11.608 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=300582 00:18:11.608 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:11.608 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:11.608 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:11.608 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:11.608 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:11.608 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:18:11.608 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:11.608 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:11.608 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:11.608 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:18:11.608 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:11.877 [2024-12-14 22:27:32.520319] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:11.877 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:11.877 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:11.877 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:11.877 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:11.877 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:12.152 Malloc3 00:18:12.152 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:12.152 [2024-12-14 22:27:32.978716] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:12.152 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:12.152 Asynchronous Event Request test 00:18:12.152 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:12.152 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:12.152 
Registering asynchronous event callbacks... 00:18:12.152 Starting namespace attribute notice tests for all controllers... 00:18:12.152 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:12.152 aer_cb - Changed Namespace 00:18:12.152 Cleaning up... 00:18:12.443 [ 00:18:12.443 { 00:18:12.443 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:12.443 "subtype": "Discovery", 00:18:12.443 "listen_addresses": [], 00:18:12.443 "allow_any_host": true, 00:18:12.443 "hosts": [] 00:18:12.443 }, 00:18:12.443 { 00:18:12.443 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:12.443 "subtype": "NVMe", 00:18:12.443 "listen_addresses": [ 00:18:12.443 { 00:18:12.443 "trtype": "VFIOUSER", 00:18:12.443 "adrfam": "IPv4", 00:18:12.443 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:12.443 "trsvcid": "0" 00:18:12.443 } 00:18:12.443 ], 00:18:12.443 "allow_any_host": true, 00:18:12.443 "hosts": [], 00:18:12.443 "serial_number": "SPDK1", 00:18:12.443 "model_number": "SPDK bdev Controller", 00:18:12.443 "max_namespaces": 32, 00:18:12.443 "min_cntlid": 1, 00:18:12.443 "max_cntlid": 65519, 00:18:12.443 "namespaces": [ 00:18:12.443 { 00:18:12.443 "nsid": 1, 00:18:12.443 "bdev_name": "Malloc1", 00:18:12.443 "name": "Malloc1", 00:18:12.443 "nguid": "BC176D04F1D44ADFB3229A5993C39021", 00:18:12.443 "uuid": "bc176d04-f1d4-4adf-b322-9a5993c39021" 00:18:12.443 }, 00:18:12.443 { 00:18:12.443 "nsid": 2, 00:18:12.443 "bdev_name": "Malloc3", 00:18:12.443 "name": "Malloc3", 00:18:12.444 "nguid": "9A593C521510473781F2A09E156AC3AF", 00:18:12.444 "uuid": "9a593c52-1510-4737-81f2-a09e156ac3af" 00:18:12.444 } 00:18:12.444 ] 00:18:12.444 }, 00:18:12.444 { 00:18:12.444 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:12.444 "subtype": "NVMe", 00:18:12.444 "listen_addresses": [ 00:18:12.444 { 00:18:12.444 "trtype": "VFIOUSER", 00:18:12.444 "adrfam": "IPv4", 00:18:12.444 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:12.444 "trsvcid": "0" 
00:18:12.444 } 00:18:12.444 ], 00:18:12.444 "allow_any_host": true, 00:18:12.444 "hosts": [], 00:18:12.444 "serial_number": "SPDK2", 00:18:12.444 "model_number": "SPDK bdev Controller", 00:18:12.444 "max_namespaces": 32, 00:18:12.444 "min_cntlid": 1, 00:18:12.444 "max_cntlid": 65519, 00:18:12.444 "namespaces": [ 00:18:12.444 { 00:18:12.444 "nsid": 1, 00:18:12.444 "bdev_name": "Malloc2", 00:18:12.444 "name": "Malloc2", 00:18:12.444 "nguid": "D0CA3D0E4D6F4615BA64715D5A6EACA8", 00:18:12.444 "uuid": "d0ca3d0e-4d6f-4615-ba64-715d5a6eaca8" 00:18:12.444 } 00:18:12.444 ] 00:18:12.444 } 00:18:12.444 ] 00:18:12.444 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 300582 00:18:12.444 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:12.444 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:12.444 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:12.444 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:12.444 [2024-12-14 22:27:33.223477] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:18:12.444 [2024-12-14 22:27:33.223526] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300798 ] 00:18:12.444 [2024-12-14 22:27:33.262090] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:12.444 [2024-12-14 22:27:33.267349] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:12.444 [2024-12-14 22:27:33.267369] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa2fd76d000 00:18:12.444 [2024-12-14 22:27:33.268352] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:12.444 [2024-12-14 22:27:33.269362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:12.444 [2024-12-14 22:27:33.270374] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:12.444 [2024-12-14 22:27:33.271381] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:12.444 [2024-12-14 22:27:33.272386] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:12.444 [2024-12-14 22:27:33.273397] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:12.444 [2024-12-14 22:27:33.274400] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:12.444 
[2024-12-14 22:27:33.275412] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:12.444 [2024-12-14 22:27:33.276423] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:12.444 [2024-12-14 22:27:33.276433] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa2fc476000 00:18:12.444 [2024-12-14 22:27:33.277335] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:12.444 [2024-12-14 22:27:33.286628] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:12.444 [2024-12-14 22:27:33.286653] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:12.444 [2024-12-14 22:27:33.291721] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:12.444 [2024-12-14 22:27:33.291755] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:12.444 [2024-12-14 22:27:33.291822] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:12.444 [2024-12-14 22:27:33.291835] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:12.444 [2024-12-14 22:27:33.291840] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:12.444 [2024-12-14 22:27:33.292727] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:12.444 [2024-12-14 22:27:33.292737] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:12.444 [2024-12-14 22:27:33.292744] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:12.444 [2024-12-14 22:27:33.293728] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:12.444 [2024-12-14 22:27:33.293736] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:12.444 [2024-12-14 22:27:33.293743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:12.444 [2024-12-14 22:27:33.294739] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:12.444 [2024-12-14 22:27:33.294747] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:12.444 [2024-12-14 22:27:33.295742] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:12.444 [2024-12-14 22:27:33.295750] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:12.444 [2024-12-14 22:27:33.295754] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:12.444 [2024-12-14 22:27:33.295763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:12.444 [2024-12-14 22:27:33.295870] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:12.444 [2024-12-14 22:27:33.295874] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:12.444 [2024-12-14 22:27:33.295879] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:12.444 [2024-12-14 22:27:33.296758] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:12.444 [2024-12-14 22:27:33.297759] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:12.444 [2024-12-14 22:27:33.298767] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:12.444 [2024-12-14 22:27:33.299772] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:12.444 [2024-12-14 22:27:33.299810] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:12.444 [2024-12-14 22:27:33.300779] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:12.444 [2024-12-14 22:27:33.300787] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:12.444 [2024-12-14 22:27:33.300791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:12.444 [2024-12-14 22:27:33.300808] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:12.444 [2024-12-14 22:27:33.300817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:12.444 [2024-12-14 22:27:33.300827] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:12.444 [2024-12-14 22:27:33.300831] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:12.444 [2024-12-14 22:27:33.300834] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:12.444 [2024-12-14 22:27:33.300844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:12.444 [2024-12-14 22:27:33.308912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:12.444 [2024-12-14 22:27:33.308923] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:12.444 [2024-12-14 22:27:33.308928] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:12.444 [2024-12-14 22:27:33.308932] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:12.444 [2024-12-14 22:27:33.308936] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:12.444 [2024-12-14 22:27:33.308941] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:12.444 [2024-12-14 22:27:33.308945] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:12.444 [2024-12-14 22:27:33.308952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:12.444 [2024-12-14 22:27:33.308961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:12.444 [2024-12-14 22:27:33.308971] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:12.712 [2024-12-14 22:27:33.316910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:12.712 [2024-12-14 22:27:33.316923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.712 [2024-12-14 22:27:33.316931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.712 [2024-12-14 22:27:33.316938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.712 [2024-12-14 22:27:33.316945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.712 [2024-12-14 22:27:33.316949] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:12.712 [2024-12-14 22:27:33.316960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:12.712 [2024-12-14 22:27:33.316968] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:12.712 [2024-12-14 22:27:33.324909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:12.712 [2024-12-14 22:27:33.324917] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:12.712 [2024-12-14 22:27:33.324921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:12.712 [2024-12-14 22:27:33.324927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:12.712 [2024-12-14 22:27:33.324932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:12.712 [2024-12-14 22:27:33.324940] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:12.712 [2024-12-14 22:27:33.332916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:12.712 [2024-12-14 22:27:33.332968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:12.712 [2024-12-14 22:27:33.332978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:12.712 
[2024-12-14 22:27:33.332985] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:12.712 [2024-12-14 22:27:33.332989] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:12.712 [2024-12-14 22:27:33.332992] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:12.712 [2024-12-14 22:27:33.332998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:12.712 [2024-12-14 22:27:33.340909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:12.712 [2024-12-14 22:27:33.340925] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:12.712 [2024-12-14 22:27:33.340934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:12.712 [2024-12-14 22:27:33.340941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:12.712 [2024-12-14 22:27:33.340948] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:12.712 [2024-12-14 22:27:33.340952] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:12.712 [2024-12-14 22:27:33.340955] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:12.712 [2024-12-14 22:27:33.340961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:12.712 [2024-12-14 22:27:33.348908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:12.712 [2024-12-14 22:27:33.348922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:12.712 [2024-12-14 22:27:33.348929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:12.712 [2024-12-14 22:27:33.348936] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:12.712 [2024-12-14 22:27:33.348940] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:12.712 [2024-12-14 22:27:33.348943] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:12.712 [2024-12-14 22:27:33.348948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:12.712 [2024-12-14 22:27:33.356908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:12.712 [2024-12-14 22:27:33.356917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:12.712 [2024-12-14 22:27:33.356923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:12.712 [2024-12-14 22:27:33.356930] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:18:12.712 [2024-12-14 22:27:33.356936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:18:12.712 [2024-12-14 22:27:33.356940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:12.712 [2024-12-14 22:27:33.356945] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:12.712 [2024-12-14 22:27:33.356949] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:12.712 [2024-12-14 22:27:33.356953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:12.712 [2024-12-14 22:27:33.356958] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:12.712 [2024-12-14 22:27:33.356973] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:12.712 [2024-12-14 22:27:33.364910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:12.712 [2024-12-14 22:27:33.364926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:12.713 [2024-12-14 22:27:33.372910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:12.713 [2024-12-14 22:27:33.372922] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:12.713 [2024-12-14 22:27:33.380909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:12.713 [2024-12-14 
22:27:33.380921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:12.713 [2024-12-14 22:27:33.388908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:12.713 [2024-12-14 22:27:33.388924] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:12.713 [2024-12-14 22:27:33.388929] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:12.713 [2024-12-14 22:27:33.388932] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:12.713 [2024-12-14 22:27:33.388936] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:12.713 [2024-12-14 22:27:33.388939] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:12.713 [2024-12-14 22:27:33.388944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:12.713 [2024-12-14 22:27:33.388951] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:12.713 [2024-12-14 22:27:33.388955] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:12.713 [2024-12-14 22:27:33.388958] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:12.713 [2024-12-14 22:27:33.388964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:12.713 [2024-12-14 22:27:33.388970] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:12.713 [2024-12-14 22:27:33.388974] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:12.713 [2024-12-14 22:27:33.388977] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:12.713 [2024-12-14 22:27:33.388982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:12.713 [2024-12-14 22:27:33.388989] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:12.713 [2024-12-14 22:27:33.388994] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:12.713 [2024-12-14 22:27:33.388997] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:12.713 [2024-12-14 22:27:33.389002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:12.713 [2024-12-14 22:27:33.396909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:12.713 [2024-12-14 22:27:33.396922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:12.713 [2024-12-14 22:27:33.396932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:12.713 [2024-12-14 22:27:33.396938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:12.713 ===================================================== 00:18:12.713 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:12.713 ===================================================== 00:18:12.713 Controller Capabilities/Features 00:18:12.713 
================================ 00:18:12.713 Vendor ID: 4e58 00:18:12.713 Subsystem Vendor ID: 4e58 00:18:12.713 Serial Number: SPDK2 00:18:12.713 Model Number: SPDK bdev Controller 00:18:12.713 Firmware Version: 25.01 00:18:12.713 Recommended Arb Burst: 6 00:18:12.713 IEEE OUI Identifier: 8d 6b 50 00:18:12.713 Multi-path I/O 00:18:12.713 May have multiple subsystem ports: Yes 00:18:12.713 May have multiple controllers: Yes 00:18:12.713 Associated with SR-IOV VF: No 00:18:12.713 Max Data Transfer Size: 131072 00:18:12.713 Max Number of Namespaces: 32 00:18:12.713 Max Number of I/O Queues: 127 00:18:12.713 NVMe Specification Version (VS): 1.3 00:18:12.713 NVMe Specification Version (Identify): 1.3 00:18:12.713 Maximum Queue Entries: 256 00:18:12.713 Contiguous Queues Required: Yes 00:18:12.713 Arbitration Mechanisms Supported 00:18:12.713 Weighted Round Robin: Not Supported 00:18:12.713 Vendor Specific: Not Supported 00:18:12.713 Reset Timeout: 15000 ms 00:18:12.713 Doorbell Stride: 4 bytes 00:18:12.713 NVM Subsystem Reset: Not Supported 00:18:12.713 Command Sets Supported 00:18:12.713 NVM Command Set: Supported 00:18:12.713 Boot Partition: Not Supported 00:18:12.713 Memory Page Size Minimum: 4096 bytes 00:18:12.713 Memory Page Size Maximum: 4096 bytes 00:18:12.713 Persistent Memory Region: Not Supported 00:18:12.713 Optional Asynchronous Events Supported 00:18:12.713 Namespace Attribute Notices: Supported 00:18:12.713 Firmware Activation Notices: Not Supported 00:18:12.713 ANA Change Notices: Not Supported 00:18:12.713 PLE Aggregate Log Change Notices: Not Supported 00:18:12.713 LBA Status Info Alert Notices: Not Supported 00:18:12.713 EGE Aggregate Log Change Notices: Not Supported 00:18:12.713 Normal NVM Subsystem Shutdown event: Not Supported 00:18:12.713 Zone Descriptor Change Notices: Not Supported 00:18:12.713 Discovery Log Change Notices: Not Supported 00:18:12.713 Controller Attributes 00:18:12.713 128-bit Host Identifier: Supported 00:18:12.713 
Non-Operational Permissive Mode: Not Supported 00:18:12.713 NVM Sets: Not Supported 00:18:12.713 Read Recovery Levels: Not Supported 00:18:12.713 Endurance Groups: Not Supported 00:18:12.713 Predictable Latency Mode: Not Supported 00:18:12.713 Traffic Based Keep ALive: Not Supported 00:18:12.713 Namespace Granularity: Not Supported 00:18:12.713 SQ Associations: Not Supported 00:18:12.713 UUID List: Not Supported 00:18:12.713 Multi-Domain Subsystem: Not Supported 00:18:12.713 Fixed Capacity Management: Not Supported 00:18:12.713 Variable Capacity Management: Not Supported 00:18:12.713 Delete Endurance Group: Not Supported 00:18:12.713 Delete NVM Set: Not Supported 00:18:12.713 Extended LBA Formats Supported: Not Supported 00:18:12.713 Flexible Data Placement Supported: Not Supported 00:18:12.713 00:18:12.713 Controller Memory Buffer Support 00:18:12.713 ================================ 00:18:12.713 Supported: No 00:18:12.713 00:18:12.713 Persistent Memory Region Support 00:18:12.713 ================================ 00:18:12.713 Supported: No 00:18:12.713 00:18:12.713 Admin Command Set Attributes 00:18:12.713 ============================ 00:18:12.713 Security Send/Receive: Not Supported 00:18:12.713 Format NVM: Not Supported 00:18:12.713 Firmware Activate/Download: Not Supported 00:18:12.713 Namespace Management: Not Supported 00:18:12.713 Device Self-Test: Not Supported 00:18:12.713 Directives: Not Supported 00:18:12.713 NVMe-MI: Not Supported 00:18:12.713 Virtualization Management: Not Supported 00:18:12.713 Doorbell Buffer Config: Not Supported 00:18:12.713 Get LBA Status Capability: Not Supported 00:18:12.713 Command & Feature Lockdown Capability: Not Supported 00:18:12.713 Abort Command Limit: 4 00:18:12.713 Async Event Request Limit: 4 00:18:12.713 Number of Firmware Slots: N/A 00:18:12.713 Firmware Slot 1 Read-Only: N/A 00:18:12.713 Firmware Activation Without Reset: N/A 00:18:12.713 Multiple Update Detection Support: N/A 00:18:12.713 Firmware Update 
Granularity: No Information Provided 00:18:12.713 Per-Namespace SMART Log: No 00:18:12.713 Asymmetric Namespace Access Log Page: Not Supported 00:18:12.713 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:12.713 Command Effects Log Page: Supported 00:18:12.713 Get Log Page Extended Data: Supported 00:18:12.713 Telemetry Log Pages: Not Supported 00:18:12.713 Persistent Event Log Pages: Not Supported 00:18:12.713 Supported Log Pages Log Page: May Support 00:18:12.713 Commands Supported & Effects Log Page: Not Supported 00:18:12.713 Feature Identifiers & Effects Log Page:May Support 00:18:12.713 NVMe-MI Commands & Effects Log Page: May Support 00:18:12.713 Data Area 4 for Telemetry Log: Not Supported 00:18:12.713 Error Log Page Entries Supported: 128 00:18:12.713 Keep Alive: Supported 00:18:12.713 Keep Alive Granularity: 10000 ms 00:18:12.713 00:18:12.713 NVM Command Set Attributes 00:18:12.713 ========================== 00:18:12.713 Submission Queue Entry Size 00:18:12.713 Max: 64 00:18:12.713 Min: 64 00:18:12.713 Completion Queue Entry Size 00:18:12.713 Max: 16 00:18:12.713 Min: 16 00:18:12.713 Number of Namespaces: 32 00:18:12.713 Compare Command: Supported 00:18:12.713 Write Uncorrectable Command: Not Supported 00:18:12.713 Dataset Management Command: Supported 00:18:12.713 Write Zeroes Command: Supported 00:18:12.713 Set Features Save Field: Not Supported 00:18:12.713 Reservations: Not Supported 00:18:12.713 Timestamp: Not Supported 00:18:12.713 Copy: Supported 00:18:12.713 Volatile Write Cache: Present 00:18:12.714 Atomic Write Unit (Normal): 1 00:18:12.714 Atomic Write Unit (PFail): 1 00:18:12.714 Atomic Compare & Write Unit: 1 00:18:12.714 Fused Compare & Write: Supported 00:18:12.714 Scatter-Gather List 00:18:12.714 SGL Command Set: Supported (Dword aligned) 00:18:12.714 SGL Keyed: Not Supported 00:18:12.714 SGL Bit Bucket Descriptor: Not Supported 00:18:12.714 SGL Metadata Pointer: Not Supported 00:18:12.714 Oversized SGL: Not Supported 00:18:12.714 SGL 
Metadata Address: Not Supported 00:18:12.714 SGL Offset: Not Supported 00:18:12.714 Transport SGL Data Block: Not Supported 00:18:12.714 Replay Protected Memory Block: Not Supported 00:18:12.714 00:18:12.714 Firmware Slot Information 00:18:12.714 ========================= 00:18:12.714 Active slot: 1 00:18:12.714 Slot 1 Firmware Revision: 25.01 00:18:12.714 00:18:12.714 00:18:12.714 Commands Supported and Effects 00:18:12.714 ============================== 00:18:12.714 Admin Commands 00:18:12.714 -------------- 00:18:12.714 Get Log Page (02h): Supported 00:18:12.714 Identify (06h): Supported 00:18:12.714 Abort (08h): Supported 00:18:12.714 Set Features (09h): Supported 00:18:12.714 Get Features (0Ah): Supported 00:18:12.714 Asynchronous Event Request (0Ch): Supported 00:18:12.714 Keep Alive (18h): Supported 00:18:12.714 I/O Commands 00:18:12.714 ------------ 00:18:12.714 Flush (00h): Supported LBA-Change 00:18:12.714 Write (01h): Supported LBA-Change 00:18:12.714 Read (02h): Supported 00:18:12.714 Compare (05h): Supported 00:18:12.714 Write Zeroes (08h): Supported LBA-Change 00:18:12.714 Dataset Management (09h): Supported LBA-Change 00:18:12.714 Copy (19h): Supported LBA-Change 00:18:12.714 00:18:12.714 Error Log 00:18:12.714 ========= 00:18:12.714 00:18:12.714 Arbitration 00:18:12.714 =========== 00:18:12.714 Arbitration Burst: 1 00:18:12.714 00:18:12.714 Power Management 00:18:12.714 ================ 00:18:12.714 Number of Power States: 1 00:18:12.714 Current Power State: Power State #0 00:18:12.714 Power State #0: 00:18:12.714 Max Power: 0.00 W 00:18:12.714 Non-Operational State: Operational 00:18:12.714 Entry Latency: Not Reported 00:18:12.714 Exit Latency: Not Reported 00:18:12.714 Relative Read Throughput: 0 00:18:12.714 Relative Read Latency: 0 00:18:12.714 Relative Write Throughput: 0 00:18:12.714 Relative Write Latency: 0 00:18:12.714 Idle Power: Not Reported 00:18:12.714 Active Power: Not Reported 00:18:12.714 Non-Operational Permissive Mode: Not 
Supported 00:18:12.714 00:18:12.714 Health Information 00:18:12.714 ================== 00:18:12.714 Critical Warnings: 00:18:12.714 Available Spare Space: OK 00:18:12.714 Temperature: OK 00:18:12.714 Device Reliability: OK 00:18:12.714 Read Only: No 00:18:12.714 Volatile Memory Backup: OK 00:18:12.714 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:12.714 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:12.714 Available Spare: 0% 00:18:12.714 Available Sp[2024-12-14 22:27:33.397032] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:12.714 [2024-12-14 22:27:33.404910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:12.714 [2024-12-14 22:27:33.404937] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:12.714 [2024-12-14 22:27:33.404946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.714 [2024-12-14 22:27:33.404952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.714 [2024-12-14 22:27:33.404957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.714 [2024-12-14 22:27:33.404963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.714 [2024-12-14 22:27:33.405019] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:12.714 [2024-12-14 22:27:33.405030] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:12.714 
[2024-12-14 22:27:33.406021] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:12.714 [2024-12-14 22:27:33.406064] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:12.714 [2024-12-14 22:27:33.406070] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:12.714 [2024-12-14 22:27:33.407020] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:12.714 [2024-12-14 22:27:33.407031] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:12.714 [2024-12-14 22:27:33.407081] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:12.714 [2024-12-14 22:27:33.408038] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:12.714 are Threshold: 0% 00:18:12.714 Life Percentage Used: 0% 00:18:12.714 Data Units Read: 0 00:18:12.714 Data Units Written: 0 00:18:12.714 Host Read Commands: 0 00:18:12.714 Host Write Commands: 0 00:18:12.714 Controller Busy Time: 0 minutes 00:18:12.714 Power Cycles: 0 00:18:12.714 Power On Hours: 0 hours 00:18:12.714 Unsafe Shutdowns: 0 00:18:12.714 Unrecoverable Media Errors: 0 00:18:12.714 Lifetime Error Log Entries: 0 00:18:12.714 Warning Temperature Time: 0 minutes 00:18:12.714 Critical Temperature Time: 0 minutes 00:18:12.714 00:18:12.714 Number of Queues 00:18:12.714 ================ 00:18:12.714 Number of I/O Submission Queues: 127 00:18:12.714 Number of I/O Completion Queues: 127 00:18:12.714 00:18:12.714 Active Namespaces 00:18:12.714 ================= 00:18:12.714 Namespace ID:1 00:18:12.714 Error Recovery Timeout: Unlimited 
00:18:12.714 Command Set Identifier: NVM (00h) 00:18:12.714 Deallocate: Supported 00:18:12.714 Deallocated/Unwritten Error: Not Supported 00:18:12.714 Deallocated Read Value: Unknown 00:18:12.714 Deallocate in Write Zeroes: Not Supported 00:18:12.714 Deallocated Guard Field: 0xFFFF 00:18:12.714 Flush: Supported 00:18:12.714 Reservation: Supported 00:18:12.714 Namespace Sharing Capabilities: Multiple Controllers 00:18:12.714 Size (in LBAs): 131072 (0GiB) 00:18:12.714 Capacity (in LBAs): 131072 (0GiB) 00:18:12.714 Utilization (in LBAs): 131072 (0GiB) 00:18:12.714 NGUID: D0CA3D0E4D6F4615BA64715D5A6EACA8 00:18:12.714 UUID: d0ca3d0e-4d6f-4615-ba64-715d5a6eaca8 00:18:12.714 Thin Provisioning: Not Supported 00:18:12.714 Per-NS Atomic Units: Yes 00:18:12.714 Atomic Boundary Size (Normal): 0 00:18:12.714 Atomic Boundary Size (PFail): 0 00:18:12.714 Atomic Boundary Offset: 0 00:18:12.714 Maximum Single Source Range Length: 65535 00:18:12.714 Maximum Copy Length: 65535 00:18:12.714 Maximum Source Range Count: 1 00:18:12.714 NGUID/EUI64 Never Reused: No 00:18:12.714 Namespace Write Protected: No 00:18:12.714 Number of LBA Formats: 1 00:18:12.714 Current LBA Format: LBA Format #00 00:18:12.714 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:12.714 00:18:12.714 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:12.982 [2024-12-14 22:27:33.639035] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:18.253 Initializing NVMe Controllers 00:18:18.253 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:18.253 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:18:18.253 Initialization complete. Launching workers. 00:18:18.253 ======================================================== 00:18:18.253 Latency(us) 00:18:18.253 Device Information : IOPS MiB/s Average min max 00:18:18.253 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39949.60 156.05 3204.14 961.92 10607.78 00:18:18.253 ======================================================== 00:18:18.253 Total : 39949.60 156.05 3204.14 961.92 10607.78 00:18:18.253 00:18:18.253 [2024-12-14 22:27:38.740159] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:18.253 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:18.253 [2024-12-14 22:27:38.975858] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:23.526 Initializing NVMe Controllers 00:18:23.526 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:23.526 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:23.526 Initialization complete. Launching workers. 
00:18:23.526 ======================================================== 00:18:23.526 Latency(us) 00:18:23.526 Device Information : IOPS MiB/s Average min max 00:18:23.526 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39965.80 156.12 3204.02 953.28 7116.96 00:18:23.526 ======================================================== 00:18:23.526 Total : 39965.80 156.12 3204.02 953.28 7116.96 00:18:23.526 00:18:23.526 [2024-12-14 22:27:43.997241] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:23.526 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:23.526 [2024-12-14 22:27:44.199410] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:28.798 [2024-12-14 22:27:49.335998] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:28.798 Initializing NVMe Controllers 00:18:28.798 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:28.798 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:28.798 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:28.798 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:28.798 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:28.798 Initialization complete. Launching workers. 
00:18:28.798 Starting thread on core 2 00:18:28.798 Starting thread on core 3 00:18:28.798 Starting thread on core 1 00:18:28.798 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:28.798 [2024-12-14 22:27:49.625329] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:32.093 [2024-12-14 22:27:52.678539] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:32.093 Initializing NVMe Controllers 00:18:32.093 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:32.093 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:32.093 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:32.093 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:32.093 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:32.093 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:32.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:32.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:32.093 Initialization complete. Launching workers. 
00:18:32.093 Starting thread on core 1 with urgent priority queue 00:18:32.093 Starting thread on core 2 with urgent priority queue 00:18:32.093 Starting thread on core 3 with urgent priority queue 00:18:32.093 Starting thread on core 0 with urgent priority queue 00:18:32.093 SPDK bdev Controller (SPDK2 ) core 0: 8456.00 IO/s 11.83 secs/100000 ios 00:18:32.093 SPDK bdev Controller (SPDK2 ) core 1: 7483.00 IO/s 13.36 secs/100000 ios 00:18:32.093 SPDK bdev Controller (SPDK2 ) core 2: 10075.00 IO/s 9.93 secs/100000 ios 00:18:32.093 SPDK bdev Controller (SPDK2 ) core 3: 6690.00 IO/s 14.95 secs/100000 ios 00:18:32.093 ======================================================== 00:18:32.093 00:18:32.093 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:32.093 [2024-12-14 22:27:52.964362] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:32.093 Initializing NVMe Controllers 00:18:32.093 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:32.093 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:32.093 Namespace ID: 1 size: 0GB 00:18:32.093 Initialization complete. 00:18:32.093 INFO: using host memory buffer for IO 00:18:32.093 Hello world! 
00:18:32.093 [2024-12-14 22:27:52.974415] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:32.352 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:32.611 [2024-12-14 22:27:53.253211] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:33.548 Initializing NVMe Controllers 00:18:33.548 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:33.548 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:33.548 Initialization complete. Launching workers. 00:18:33.548 submit (in ns) avg, min, max = 6504.8, 3187.6, 3999275.2 00:18:33.548 complete (in ns) avg, min, max = 18867.5, 1762.9, 4174161.9 00:18:33.548 00:18:33.548 Submit histogram 00:18:33.548 ================ 00:18:33.548 Range in us Cumulative Count 00:18:33.548 3.185 - 3.200: 0.1872% ( 31) 00:18:33.548 3.200 - 3.215: 1.5278% ( 222) 00:18:33.548 3.215 - 3.230: 5.7246% ( 695) 00:18:33.548 3.230 - 3.246: 11.4070% ( 941) 00:18:33.548 3.246 - 3.261: 17.8382% ( 1065) 00:18:33.548 3.261 - 3.276: 25.7971% ( 1318) 00:18:33.548 3.276 - 3.291: 33.4601% ( 1269) 00:18:33.548 3.291 - 3.307: 39.1546% ( 943) 00:18:33.548 3.307 - 3.322: 44.0036% ( 803) 00:18:33.548 3.322 - 3.337: 49.0157% ( 830) 00:18:33.548 3.337 - 3.352: 53.2428% ( 700) 00:18:33.548 3.352 - 3.368: 57.3671% ( 683) 00:18:33.548 3.368 - 3.383: 64.3357% ( 1154) 00:18:33.548 3.383 - 3.398: 70.6280% ( 1042) 00:18:33.548 3.398 - 3.413: 75.9420% ( 880) 00:18:33.548 3.413 - 3.429: 80.9783% ( 834) 00:18:33.548 3.429 - 3.444: 84.1304% ( 522) 00:18:33.548 3.444 - 3.459: 86.0930% ( 325) 00:18:33.548 3.459 - 3.474: 87.2766% ( 196) 00:18:33.548 3.474 - 3.490: 87.8321% ( 92) 00:18:33.548 3.490 - 3.505: 88.1703% ( 
56) 00:18:33.548 3.505 - 3.520: 88.7681% ( 99) 00:18:33.548 3.520 - 3.535: 89.4807% ( 118) 00:18:33.548 3.535 - 3.550: 90.1570% ( 112) 00:18:33.548 3.550 - 3.566: 91.3345% ( 195) 00:18:33.548 3.566 - 3.581: 92.2645% ( 154) 00:18:33.548 3.581 - 3.596: 93.0374% ( 128) 00:18:33.548 3.596 - 3.611: 93.8225% ( 130) 00:18:33.548 3.611 - 3.627: 94.6860% ( 143) 00:18:33.548 3.627 - 3.642: 95.6944% ( 167) 00:18:33.548 3.642 - 3.657: 96.4614% ( 127) 00:18:33.548 3.657 - 3.672: 97.2947% ( 138) 00:18:33.548 3.672 - 3.688: 97.9227% ( 104) 00:18:33.548 3.688 - 3.703: 98.3213% ( 66) 00:18:33.548 3.703 - 3.718: 98.6353% ( 52) 00:18:33.548 3.718 - 3.733: 99.0097% ( 62) 00:18:33.548 3.733 - 3.749: 99.2331% ( 37) 00:18:33.548 3.749 - 3.764: 99.4203% ( 31) 00:18:33.548 3.764 - 3.779: 99.5350% ( 19) 00:18:33.549 3.779 - 3.794: 99.6014% ( 11) 00:18:33.549 3.794 - 3.810: 99.6437% ( 7) 00:18:33.549 3.810 - 3.825: 99.6558% ( 2) 00:18:33.549 3.825 - 3.840: 99.6860% ( 5) 00:18:33.549 4.053 - 4.084: 99.6920% ( 1) 00:18:33.549 5.150 - 5.181: 99.6981% ( 1) 00:18:33.549 5.211 - 5.242: 99.7041% ( 1) 00:18:33.549 5.303 - 5.333: 99.7162% ( 2) 00:18:33.549 5.333 - 5.364: 99.7222% ( 1) 00:18:33.549 5.425 - 5.455: 99.7283% ( 1) 00:18:33.549 5.516 - 5.547: 99.7343% ( 1) 00:18:33.549 5.547 - 5.577: 99.7403% ( 1) 00:18:33.549 5.790 - 5.821: 99.7464% ( 1) 00:18:33.549 5.821 - 5.851: 99.7524% ( 1) 00:18:33.549 6.065 - 6.095: 99.7585% ( 1) 00:18:33.549 6.126 - 6.156: 99.7645% ( 1) 00:18:33.549 6.156 - 6.187: 99.7705% ( 1) 00:18:33.549 6.248 - 6.278: 99.7766% ( 1) 00:18:33.549 6.278 - 6.309: 99.7826% ( 1) 00:18:33.549 6.430 - 6.461: 99.7947% ( 2) 00:18:33.549 6.461 - 6.491: 99.8007% ( 1) 00:18:33.549 6.491 - 6.522: 99.8128% ( 2) 00:18:33.549 6.522 - 6.552: 99.8188% ( 1) 00:18:33.549 6.613 - 6.644: 99.8309% ( 2) 00:18:33.549 6.949 - 6.979: 99.8430% ( 2) 00:18:33.549 6.979 - 7.010: 99.8490% ( 1) 00:18:33.549 7.040 - 7.070: 99.8551% ( 1) 00:18:33.549 7.070 - 7.101: 99.8611% ( 1) 00:18:33.549 7.131 - 7.162: 
99.8671% ( 1) 00:18:33.549 7.162 - 7.192: 99.8732% ( 1) 00:18:33.549 7.192 - 7.223: 99.8792% ( 1) 00:18:33.549 7.314 - 7.345: 99.8853% ( 1) 00:18:33.549 7.741 - 7.771: 99.8913% ( 1) 00:18:33.549 7.771 - 7.802: 99.8973% ( 1) 00:18:33.549 8.046 - 8.107: 99.9034% ( 1) 00:18:33.549 8.594 - 8.655: 99.9094% ( 1) 00:18:33.549 8.838 - 8.899: 99.9155% ( 1) 00:18:33.549 13.288 - 13.349: 99.9215% ( 1) 00:18:33.549 3994.575 - 4025.783: 100.0000% ( 13) 00:18:33.549 00:18:33.549 Complete histogram 00:18:33.549 ================== 00:18:33.549 [2024-12-14 22:27:54.357919] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:33.549 Range in us Cumulative Count 00:18:33.549 1.760 - 1.768: 0.0725% ( 12) 00:18:33.549 1.768 - 1.775: 1.4070% ( 221) 00:18:33.549 1.775 - 1.783: 8.7017% ( 1208) 00:18:33.549 1.783 - 1.790: 24.1304% ( 2555) 00:18:33.549 1.790 - 1.798: 36.9143% ( 2117) 00:18:33.549 1.798 - 1.806: 42.5664% ( 936) 00:18:33.549 1.806 - 1.813: 46.2258% ( 606) 00:18:33.549 1.813 - 1.821: 53.3575% ( 1181) 00:18:33.549 1.821 - 1.829: 68.3092% ( 2476) 00:18:33.549 1.829 - 1.836: 82.8684% ( 2411) 00:18:33.549 1.836 - 1.844: 90.6099% ( 1282) 00:18:33.549 1.844 - 1.851: 93.8043% ( 529) 00:18:33.549 1.851 - 1.859: 95.5133% ( 283) 00:18:33.549 1.859 - 1.867: 96.7210% ( 200) 00:18:33.549 1.867 - 1.874: 97.3973% ( 112) 00:18:33.549 1.874 - 1.882: 97.6630% ( 44) 00:18:33.549 1.882 - 1.890: 97.8986% ( 39) 00:18:33.549 1.890 - 1.897: 98.1824% ( 47) 00:18:33.549 1.897 - 1.905: 98.6051% ( 70) 00:18:33.549 1.905 - 1.912: 98.9614% ( 59) 00:18:33.549 1.912 - 1.920: 99.1667% ( 34) 00:18:33.549 1.920 - 1.928: 99.2633% ( 16) 00:18:33.549 1.928 - 1.935: 99.2874% ( 4) 00:18:33.549 1.935 - 1.943: 99.3237% ( 6) 00:18:33.549 1.943 - 1.950: 99.3297% ( 1) 00:18:33.549 1.950 - 1.966: 99.3478% ( 3) 00:18:33.549 1.966 - 1.981: 99.3599% ( 2) 00:18:33.549 2.011 - 2.027: 99.3659% ( 1) 00:18:33.549 2.042 - 2.057: 99.3720% ( 1) 00:18:33.549 2.057 - 2.072:
99.3780% ( 1) 00:18:33.549 3.657 - 3.672: 99.3841% ( 1) 00:18:33.549 3.688 - 3.703: 99.3901% ( 1) 00:18:33.549 3.992 - 4.023: 99.3961% ( 1) 00:18:33.549 4.114 - 4.145: 99.4022% ( 1) 00:18:33.549 4.206 - 4.236: 99.4082% ( 1) 00:18:33.549 4.358 - 4.389: 99.4143% ( 1) 00:18:33.549 4.632 - 4.663: 99.4203% ( 1) 00:18:33.549 4.907 - 4.937: 99.4263% ( 1) 00:18:33.549 4.998 - 5.029: 99.4384% ( 2) 00:18:33.549 5.029 - 5.059: 99.4444% ( 1) 00:18:33.549 5.059 - 5.090: 99.4565% ( 2) 00:18:33.549 5.150 - 5.181: 99.4626% ( 1) 00:18:33.549 5.211 - 5.242: 99.4686% ( 1) 00:18:33.549 5.303 - 5.333: 99.4746% ( 1) 00:18:33.549 5.455 - 5.486: 99.4807% ( 1) 00:18:33.549 5.486 - 5.516: 99.4867% ( 1) 00:18:33.549 5.790 - 5.821: 99.4928% ( 1) 00:18:33.549 5.851 - 5.882: 99.4988% ( 1) 00:18:33.549 5.912 - 5.943: 99.5048% ( 1) 00:18:33.549 5.973 - 6.004: 99.5109% ( 1) 00:18:33.549 6.004 - 6.034: 99.5229% ( 2) 00:18:33.549 6.034 - 6.065: 99.5290% ( 1) 00:18:33.549 6.187 - 6.217: 99.5350% ( 1) 00:18:33.549 6.309 - 6.339: 99.5411% ( 1) 00:18:33.549 6.674 - 6.705: 99.5471% ( 1) 00:18:33.549 6.766 - 6.796: 99.5531% ( 1) 00:18:33.549 9.570 - 9.630: 99.5592% ( 1) 00:18:33.549 32.670 - 32.914: 99.5652% ( 1) 00:18:33.549 38.522 - 38.766: 99.5713% ( 1) 00:18:33.549 3198.781 - 3214.385: 99.5773% ( 1) 00:18:33.549 3229.989 - 3245.592: 99.5833% ( 1) 00:18:33.549 3994.575 - 4025.783: 99.9940% ( 68) 00:18:33.549 4150.613 - 4181.821: 100.0000% ( 1) 00:18:33.549 00:18:33.549 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:33.549 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:33.549 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:33.549 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:33.549 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:33.808 [ 00:18:33.808 { 00:18:33.808 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:33.808 "subtype": "Discovery", 00:18:33.808 "listen_addresses": [], 00:18:33.808 "allow_any_host": true, 00:18:33.808 "hosts": [] 00:18:33.808 }, 00:18:33.808 { 00:18:33.808 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:33.808 "subtype": "NVMe", 00:18:33.808 "listen_addresses": [ 00:18:33.808 { 00:18:33.808 "trtype": "VFIOUSER", 00:18:33.808 "adrfam": "IPv4", 00:18:33.808 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:33.808 "trsvcid": "0" 00:18:33.808 } 00:18:33.808 ], 00:18:33.808 "allow_any_host": true, 00:18:33.808 "hosts": [], 00:18:33.808 "serial_number": "SPDK1", 00:18:33.808 "model_number": "SPDK bdev Controller", 00:18:33.808 "max_namespaces": 32, 00:18:33.808 "min_cntlid": 1, 00:18:33.808 "max_cntlid": 65519, 00:18:33.808 "namespaces": [ 00:18:33.808 { 00:18:33.808 "nsid": 1, 00:18:33.808 "bdev_name": "Malloc1", 00:18:33.808 "name": "Malloc1", 00:18:33.808 "nguid": "BC176D04F1D44ADFB3229A5993C39021", 00:18:33.808 "uuid": "bc176d04-f1d4-4adf-b322-9a5993c39021" 00:18:33.808 }, 00:18:33.808 { 00:18:33.808 "nsid": 2, 00:18:33.808 "bdev_name": "Malloc3", 00:18:33.808 "name": "Malloc3", 00:18:33.808 "nguid": "9A593C521510473781F2A09E156AC3AF", 00:18:33.808 "uuid": "9a593c52-1510-4737-81f2-a09e156ac3af" 00:18:33.808 } 00:18:33.808 ] 00:18:33.808 }, 00:18:33.808 { 00:18:33.808 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:33.808 "subtype": "NVMe", 00:18:33.808 "listen_addresses": [ 00:18:33.808 { 00:18:33.808 "trtype": "VFIOUSER", 00:18:33.808 "adrfam": "IPv4", 00:18:33.808 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:33.808 "trsvcid": "0" 00:18:33.808 } 00:18:33.808 ], 00:18:33.808 "allow_any_host": true, 
00:18:33.808 "hosts": [], 00:18:33.808 "serial_number": "SPDK2", 00:18:33.808 "model_number": "SPDK bdev Controller", 00:18:33.808 "max_namespaces": 32, 00:18:33.808 "min_cntlid": 1, 00:18:33.808 "max_cntlid": 65519, 00:18:33.808 "namespaces": [ 00:18:33.808 { 00:18:33.808 "nsid": 1, 00:18:33.808 "bdev_name": "Malloc2", 00:18:33.808 "name": "Malloc2", 00:18:33.808 "nguid": "D0CA3D0E4D6F4615BA64715D5A6EACA8", 00:18:33.808 "uuid": "d0ca3d0e-4d6f-4615-ba64-715d5a6eaca8" 00:18:33.808 } 00:18:33.808 ] 00:18:33.808 } 00:18:33.808 ] 00:18:33.808 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:33.808 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=304273 00:18:33.808 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:33.808 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:33.808 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:33.808 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:33.808 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:33.808 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:18:33.808 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:34.068 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:34.068 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:34.068 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:18:34.068 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:34.068 [2024-12-14 22:27:54.765609] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:34.068 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:34.068 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:34.068 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:34.068 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:34.068 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:34.326 Malloc4 00:18:34.326 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:34.326 [2024-12-14 22:27:55.198814] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:34.585 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:34.585 Asynchronous Event Request test 00:18:34.585 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:34.585 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:34.585 
Registering asynchronous event callbacks... 00:18:34.585 Starting namespace attribute notice tests for all controllers... 00:18:34.585 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:34.585 aer_cb - Changed Namespace 00:18:34.585 Cleaning up... 00:18:34.585 [ 00:18:34.585 { 00:18:34.585 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:34.585 "subtype": "Discovery", 00:18:34.585 "listen_addresses": [], 00:18:34.585 "allow_any_host": true, 00:18:34.585 "hosts": [] 00:18:34.585 }, 00:18:34.585 { 00:18:34.585 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:34.585 "subtype": "NVMe", 00:18:34.585 "listen_addresses": [ 00:18:34.585 { 00:18:34.585 "trtype": "VFIOUSER", 00:18:34.585 "adrfam": "IPv4", 00:18:34.585 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:34.585 "trsvcid": "0" 00:18:34.585 } 00:18:34.585 ], 00:18:34.585 "allow_any_host": true, 00:18:34.585 "hosts": [], 00:18:34.585 "serial_number": "SPDK1", 00:18:34.585 "model_number": "SPDK bdev Controller", 00:18:34.585 "max_namespaces": 32, 00:18:34.585 "min_cntlid": 1, 00:18:34.585 "max_cntlid": 65519, 00:18:34.585 "namespaces": [ 00:18:34.585 { 00:18:34.585 "nsid": 1, 00:18:34.585 "bdev_name": "Malloc1", 00:18:34.585 "name": "Malloc1", 00:18:34.585 "nguid": "BC176D04F1D44ADFB3229A5993C39021", 00:18:34.585 "uuid": "bc176d04-f1d4-4adf-b322-9a5993c39021" 00:18:34.585 }, 00:18:34.585 { 00:18:34.585 "nsid": 2, 00:18:34.585 "bdev_name": "Malloc3", 00:18:34.585 "name": "Malloc3", 00:18:34.585 "nguid": "9A593C521510473781F2A09E156AC3AF", 00:18:34.585 "uuid": "9a593c52-1510-4737-81f2-a09e156ac3af" 00:18:34.585 } 00:18:34.585 ] 00:18:34.585 }, 00:18:34.585 { 00:18:34.585 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:34.585 "subtype": "NVMe", 00:18:34.585 "listen_addresses": [ 00:18:34.585 { 00:18:34.585 "trtype": "VFIOUSER", 00:18:34.585 "adrfam": "IPv4", 00:18:34.585 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:34.585 "trsvcid": "0" 
00:18:34.585 } 00:18:34.585 ], 00:18:34.585 "allow_any_host": true, 00:18:34.585 "hosts": [], 00:18:34.585 "serial_number": "SPDK2", 00:18:34.585 "model_number": "SPDK bdev Controller", 00:18:34.585 "max_namespaces": 32, 00:18:34.585 "min_cntlid": 1, 00:18:34.585 "max_cntlid": 65519, 00:18:34.585 "namespaces": [ 00:18:34.585 { 00:18:34.585 "nsid": 1, 00:18:34.585 "bdev_name": "Malloc2", 00:18:34.585 "name": "Malloc2", 00:18:34.585 "nguid": "D0CA3D0E4D6F4615BA64715D5A6EACA8", 00:18:34.585 "uuid": "d0ca3d0e-4d6f-4615-ba64-715d5a6eaca8" 00:18:34.585 }, 00:18:34.585 { 00:18:34.585 "nsid": 2, 00:18:34.585 "bdev_name": "Malloc4", 00:18:34.585 "name": "Malloc4", 00:18:34.585 "nguid": "92FDB769BC9B41C681227E15F111546E", 00:18:34.585 "uuid": "92fdb769-bc9b-41c6-8122-7e15f111546e" 00:18:34.585 } 00:18:34.585 ] 00:18:34.585 } 00:18:34.585 ] 00:18:34.585 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 304273 00:18:34.585 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:34.585 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 296684 00:18:34.585 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 296684 ']' 00:18:34.585 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 296684 00:18:34.585 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:34.585 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.585 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 296684 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 296684' 00:18:34.845 killing process with pid 296684 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 296684 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 296684 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=304394 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 304394' 00:18:34.845 Process pid: 304394 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 304394 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@835 -- # '[' -z 304394 ']' 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.845 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:35.105 [2024-12-14 22:27:55.755936] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:35.105 [2024-12-14 22:27:55.756794] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:35.105 [2024-12-14 22:27:55.756835] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.105 [2024-12-14 22:27:55.825985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:35.105 [2024-12-14 22:27:55.846409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.105 [2024-12-14 22:27:55.846446] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:35.105 [2024-12-14 22:27:55.846452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.105 [2024-12-14 22:27:55.846458] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.105 [2024-12-14 22:27:55.846463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:35.105 [2024-12-14 22:27:55.847776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.105 [2024-12-14 22:27:55.847814] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.105 [2024-12-14 22:27:55.847937] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.105 [2024-12-14 22:27:55.847937] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:35.105 [2024-12-14 22:27:55.911050] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:35.105 [2024-12-14 22:27:55.911797] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:35.105 [2024-12-14 22:27:55.912095] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:18:35.105 [2024-12-14 22:27:55.912555] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:35.105 [2024-12-14 22:27:55.912584] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:18:35.105 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.105 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:35.105 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:36.482 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:36.482 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:36.482 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:36.482 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:36.482 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:36.482 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:36.482 Malloc1 00:18:36.741 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:36.741 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:37.000 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:18:37.258 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:37.258 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:37.258 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:37.517 Malloc2 00:18:37.517 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:37.517 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:37.776 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:38.035 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:38.035 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 304394 00:18:38.035 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 304394 ']' 00:18:38.035 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 304394 00:18:38.035 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:38.035 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.035 22:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 304394 00:18:38.035 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.035 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.035 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 304394' 00:18:38.035 killing process with pid 304394 00:18:38.035 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 304394 00:18:38.035 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 304394 00:18:38.295 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:38.295 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:38.295 00:18:38.295 real 0m51.183s 00:18:38.295 user 3m18.292s 00:18:38.295 sys 0m3.138s 00:18:38.295 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.295 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:38.295 ************************************ 00:18:38.295 END TEST nvmf_vfio_user 00:18:38.295 ************************************ 00:18:38.295 22:27:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:38.295 22:27:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:38.295 22:27:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.295 22:27:59 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:18:38.295 ************************************ 00:18:38.295 START TEST nvmf_vfio_user_nvme_compliance 00:18:38.295 ************************************ 00:18:38.295 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:38.295 * Looking for test storage... 00:18:38.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:38.295 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:38.295 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:18:38.295 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:38.556 22:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:38.556 22:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:38.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.556 --rc genhtml_branch_coverage=1 00:18:38.556 --rc genhtml_function_coverage=1 00:18:38.556 --rc genhtml_legend=1 00:18:38.556 --rc geninfo_all_blocks=1 00:18:38.556 --rc geninfo_unexecuted_blocks=1 00:18:38.556 00:18:38.556 ' 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:38.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.556 --rc genhtml_branch_coverage=1 00:18:38.556 --rc genhtml_function_coverage=1 00:18:38.556 --rc genhtml_legend=1 00:18:38.556 --rc geninfo_all_blocks=1 00:18:38.556 --rc geninfo_unexecuted_blocks=1 00:18:38.556 00:18:38.556 ' 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:38.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.556 --rc genhtml_branch_coverage=1 00:18:38.556 --rc genhtml_function_coverage=1 00:18:38.556 --rc 
genhtml_legend=1 00:18:38.556 --rc geninfo_all_blocks=1 00:18:38.556 --rc geninfo_unexecuted_blocks=1 00:18:38.556 00:18:38.556 ' 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:38.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.556 --rc genhtml_branch_coverage=1 00:18:38.556 --rc genhtml_function_coverage=1 00:18:38.556 --rc genhtml_legend=1 00:18:38.556 --rc geninfo_all_blocks=1 00:18:38.556 --rc geninfo_unexecuted_blocks=1 00:18:38.556 00:18:38.556 ' 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:38.556 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.557 22:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:38.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:38.557 22:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=305140 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 305140' 00:18:38.557 Process pid: 305140 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 305140 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 305140 ']' 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.557 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:38.557 [2024-12-14 22:27:59.339081] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:38.557 [2024-12-14 22:27:59.339130] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.557 [2024-12-14 22:27:59.413358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:38.557 [2024-12-14 22:27:59.434946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.557 [2024-12-14 22:27:59.434983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.557 [2024-12-14 22:27:59.434991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.557 [2024-12-14 22:27:59.434998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.557 [2024-12-14 22:27:59.435003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
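The `waitforlisten 305140` step above blocks until the freshly launched `nvmf_tgt` process is alive and listening on its UNIX domain RPC socket. A minimal standalone sketch of that polling idiom is below; it is not SPDK's actual helper, and the retry budget, sleep interval, and demo paths are illustrative assumptions:

```shell
#!/usr/bin/env bash
# Hedged sketch of the "waitforlisten" pattern seen in the log: poll until
# the target pid is still alive AND its RPC socket path has appeared.
# retry budget and interval are illustrative, not SPDK's real values.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=50
    while (( max_retries-- > 0 )); do
        # kill -0 probes process existence without delivering a signal
        if kill -0 "$pid" 2>/dev/null && [[ -e $rpc_addr ]]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Demo: a background sleep stands in for nvmf_tgt, and a plain file
# stands in for the socket it would create at startup.
demo_sock=$(mktemp)
sleep 5 &
demo_pid=$!
if waitforlisten "$demo_pid" "$demo_sock"; then
    echo "listening"
fi
kill "$demo_pid" 2>/dev/null
rm -f "$demo_sock"
```

The real helper additionally issues an RPC over the socket to confirm the application is responsive, not just that the socket path exists; this sketch keeps only the liveness-plus-path polling loop.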
00:18:38.557 [2024-12-14 22:27:59.436315] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.557 [2024-12-14 22:27:59.436352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.557 [2024-12-14 22:27:59.436352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.817 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.817 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:18:38.817 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.754 22:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:39.754 malloc0 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:39.754 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:40.013 00:18:40.013 00:18:40.013 CUnit - A unit testing framework for C - Version 2.1-3 00:18:40.013 http://cunit.sourceforge.net/ 00:18:40.013 00:18:40.013 00:18:40.013 Suite: nvme_compliance 00:18:40.013 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-14 22:28:00.780378] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.013 [2024-12-14 22:28:00.781717] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:40.013 [2024-12-14 22:28:00.781732] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:40.013 [2024-12-14 22:28:00.781739] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:40.013 [2024-12-14 22:28:00.783398] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.013 passed 00:18:40.013 Test: admin_identify_ctrlr_verify_fused ...[2024-12-14 22:28:00.863939] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.013 [2024-12-14 22:28:00.866966] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.013 passed 00:18:40.272 Test: admin_identify_ns ...[2024-12-14 22:28:00.945422] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.272 [2024-12-14 22:28:01.005916] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:40.272 [2024-12-14 22:28:01.013925] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:40.272 [2024-12-14 22:28:01.034996] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:18:40.272 passed 00:18:40.272 Test: admin_get_features_mandatory_features ...[2024-12-14 22:28:01.108740] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.272 [2024-12-14 22:28:01.111760] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.272 passed 00:18:40.531 Test: admin_get_features_optional_features ...[2024-12-14 22:28:01.188301] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.531 [2024-12-14 22:28:01.191321] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.531 passed 00:18:40.531 Test: admin_set_features_number_of_queues ...[2024-12-14 22:28:01.266020] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.531 [2024-12-14 22:28:01.375988] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.531 passed 00:18:40.789 Test: admin_get_log_page_mandatory_logs ...[2024-12-14 22:28:01.448769] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.789 [2024-12-14 22:28:01.453801] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.789 passed 00:18:40.790 Test: admin_get_log_page_with_lpo ...[2024-12-14 22:28:01.525032] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.790 [2024-12-14 22:28:01.593916] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:40.790 [2024-12-14 22:28:01.606971] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.790 passed 00:18:41.048 Test: fabric_property_get ...[2024-12-14 22:28:01.682595] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.048 [2024-12-14 22:28:01.683827] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:41.048 [2024-12-14 22:28:01.685617] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.048 passed 00:18:41.048 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-14 22:28:01.760133] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.048 [2024-12-14 22:28:01.761363] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:41.048 [2024-12-14 22:28:01.763152] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.048 passed 00:18:41.048 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-14 22:28:01.837989] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.048 [2024-12-14 22:28:01.922915] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:41.307 [2024-12-14 22:28:01.938910] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:41.307 [2024-12-14 22:28:01.944008] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.307 passed 00:18:41.307 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-14 22:28:02.019509] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.307 [2024-12-14 22:28:02.020736] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:41.307 [2024-12-14 22:28:02.022534] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.307 passed 00:18:41.307 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-14 22:28:02.101113] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.307 [2024-12-14 22:28:02.177914] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:41.566 [2024-12-14 
22:28:02.201911] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:41.566 [2024-12-14 22:28:02.206988] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.566 passed 00:18:41.566 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-14 22:28:02.279654] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.566 [2024-12-14 22:28:02.280880] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:41.566 [2024-12-14 22:28:02.280911] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:41.566 [2024-12-14 22:28:02.284686] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.566 passed 00:18:41.566 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-14 22:28:02.358207] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.566 [2024-12-14 22:28:02.449938] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:41.825 [2024-12-14 22:28:02.457910] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:41.825 [2024-12-14 22:28:02.465920] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:41.825 [2024-12-14 22:28:02.473912] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:41.825 [2024-12-14 22:28:02.502991] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.825 passed 00:18:41.825 Test: admin_create_io_sq_verify_pc ...[2024-12-14 22:28:02.576647] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.825 [2024-12-14 22:28:02.591913] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:41.825 [2024-12-14 22:28:02.609824] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.825 passed 00:18:41.825 Test: admin_create_io_qp_max_qps ...[2024-12-14 22:28:02.685370] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:43.201 [2024-12-14 22:28:03.786912] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:18:43.460 [2024-12-14 22:28:04.166227] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:43.460 passed 00:18:43.460 Test: admin_create_io_sq_shared_cq ...[2024-12-14 22:28:04.240055] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:43.719 [2024-12-14 22:28:04.372920] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:43.719 [2024-12-14 22:28:04.409972] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:43.719 passed 00:18:43.719 00:18:43.719 Run Summary: Type Total Ran Passed Failed Inactive 00:18:43.719 suites 1 1 n/a 0 0 00:18:43.719 tests 18 18 18 0 0 00:18:43.719 asserts 360 360 360 0 n/a 00:18:43.719 00:18:43.719 Elapsed time = 1.492 seconds 00:18:43.719 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 305140 00:18:43.719 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 305140 ']' 00:18:43.719 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 305140 00:18:43.719 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:18:43.719 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.719 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 305140 00:18:43.719 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:43.719 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:43.719 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 305140' 00:18:43.719 killing process with pid 305140 00:18:43.719 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 305140 00:18:43.719 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 305140 00:18:43.979 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:43.979 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:43.979 00:18:43.979 real 0m5.606s 00:18:43.979 user 0m15.706s 00:18:43.979 sys 0m0.515s 00:18:43.979 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.979 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:43.979 ************************************ 00:18:43.979 END TEST nvmf_vfio_user_nvme_compliance 00:18:43.979 ************************************ 00:18:43.979 22:28:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:43.979 22:28:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:43.979 22:28:04 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.979 22:28:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:43.979 ************************************ 00:18:43.979 START TEST nvmf_vfio_user_fuzz 00:18:43.979 ************************************ 00:18:43.979 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:43.979 * Looking for test storage... 00:18:43.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.979 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:43.979 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:18:43.979 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.239 22:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:44.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.239 --rc genhtml_branch_coverage=1 00:18:44.239 --rc genhtml_function_coverage=1 00:18:44.239 --rc genhtml_legend=1 00:18:44.239 --rc geninfo_all_blocks=1 00:18:44.239 --rc geninfo_unexecuted_blocks=1 00:18:44.239 00:18:44.239 ' 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:44.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.239 --rc genhtml_branch_coverage=1 00:18:44.239 --rc genhtml_function_coverage=1 00:18:44.239 --rc genhtml_legend=1 00:18:44.239 --rc geninfo_all_blocks=1 00:18:44.239 --rc geninfo_unexecuted_blocks=1 00:18:44.239 00:18:44.239 ' 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:44.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.239 --rc genhtml_branch_coverage=1 00:18:44.239 --rc genhtml_function_coverage=1 00:18:44.239 --rc genhtml_legend=1 00:18:44.239 --rc geninfo_all_blocks=1 00:18:44.239 --rc geninfo_unexecuted_blocks=1 00:18:44.239 00:18:44.239 ' 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:44.239 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:44.239 --rc genhtml_branch_coverage=1 00:18:44.239 --rc genhtml_function_coverage=1 00:18:44.239 --rc genhtml_legend=1 00:18:44.239 --rc geninfo_all_blocks=1 00:18:44.239 --rc geninfo_unexecuted_blocks=1 00:18:44.239 00:18:44.239 ' 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.239 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.240 22:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:44.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=306095 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 306095' 00:18:44.240 Process pid: 306095 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 306095 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 306095 ']' 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.240 22:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.240 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.498 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.498 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:44.499 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:45.435 malloc0 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:18:45.435 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:17.510 Fuzzing completed. Shutting down the fuzz application 00:19:17.510 00:19:17.510 Dumping successful admin opcodes: 00:19:17.510 9, 10, 00:19:17.510 Dumping successful io opcodes: 00:19:17.510 0, 00:19:17.510 NS: 0x20000081ef00 I/O qp, Total commands completed: 1092605, total successful commands: 4303, random_seed: 681004928 00:19:17.510 NS: 0x20000081ef00 admin qp, Total commands completed: 267808, total successful commands: 62, random_seed: 417067200 00:19:17.510 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:17.510 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.510 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:17.510 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.510 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 306095 00:19:17.510 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 306095 ']' 00:19:17.510 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 306095 00:19:17.510 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:17.510 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.510 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 306095 00:19:17.510 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.511 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.511 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 306095' 00:19:17.511 killing process with pid 306095 00:19:17.511 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 306095 00:19:17.511 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 306095 00:19:17.511 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:17.511 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:17.511 00:19:17.511 real 0m32.175s 00:19:17.511 user 0m32.344s 00:19:17.511 sys 0m28.807s 00:19:17.511 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.511 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:17.511 ************************************ 00:19:17.511 END TEST nvmf_vfio_user_fuzz 00:19:17.511 ************************************ 00:19:17.511 22:28:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:17.511 22:28:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:17.511 22:28:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.511 22:28:36 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:19:17.511 ************************************ 00:19:17.511 START TEST nvmf_auth_target 00:19:17.511 ************************************ 00:19:17.511 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:17.511 * Looking for test storage... 00:19:17.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 
00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:17.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.511 --rc genhtml_branch_coverage=1 00:19:17.511 --rc genhtml_function_coverage=1 00:19:17.511 --rc genhtml_legend=1 00:19:17.511 --rc geninfo_all_blocks=1 00:19:17.511 --rc geninfo_unexecuted_blocks=1 00:19:17.511 00:19:17.511 ' 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:17.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.511 --rc genhtml_branch_coverage=1 00:19:17.511 --rc genhtml_function_coverage=1 00:19:17.511 --rc genhtml_legend=1 00:19:17.511 --rc geninfo_all_blocks=1 00:19:17.511 --rc geninfo_unexecuted_blocks=1 00:19:17.511 00:19:17.511 ' 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:17.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.511 --rc genhtml_branch_coverage=1 00:19:17.511 --rc genhtml_function_coverage=1 00:19:17.511 --rc genhtml_legend=1 00:19:17.511 --rc geninfo_all_blocks=1 00:19:17.511 --rc geninfo_unexecuted_blocks=1 00:19:17.511 00:19:17.511 ' 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:17.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.511 --rc genhtml_branch_coverage=1 00:19:17.511 --rc genhtml_function_coverage=1 00:19:17.511 --rc genhtml_legend=1 00:19:17.511 --rc geninfo_all_blocks=1 00:19:17.511 --rc geninfo_unexecuted_blocks=1 00:19:17.511 00:19:17.511 ' 
00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.511 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:17.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:17.512 22:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:17.512 22:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:17.512 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:22.787 22:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:22.787 22:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:22.787 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:22.787 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:22.787 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.788 
22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:22.788 Found net devices under 0000:af:00.0: cvl_0_0 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:22.788 
22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:22.788 Found net devices under 0000:af:00.1: cvl_0_1 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:22.788 22:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:22.788 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
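The netns plumbing traced above (common.sh@265–284) can be condensed into a standalone dry-run sketch: one NIC port is moved into a private namespace to act as the target, while its sibling stays in the root namespace as the initiator. The `cvl_0_0`/`cvl_0_1` names and 10.0.0.x addresses are this rig's; treat them as assumptions elsewhere.

```shell
# Dry-run: each step is echoed instead of executed.
# Swap run() for 'sudo "$@"' to apply for real (requires root).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk     # namespace holding the target-side port
TARGET_IF=cvl_0_0      # port that will serve NVMe/TCP from inside the netns
INITIATOR_IF=cvl_0_1   # sibling port left in the root namespace

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
```

The two back-to-back pings in the trace (root → 10.0.0.2, netns → 10.0.0.1) are the sanity check that this wiring is bidirectional before `nvmf_tgt` is launched inside the namespace.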
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:22.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:22.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:19:22.788 00:19:22.788 --- 10.0.0.2 ping statistics --- 00:19:22.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.788 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:22.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:22.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:19:22.788 00:19:22.788 --- 10.0.0.1 ping statistics --- 00:19:22.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.788 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
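The `ipts` call at common.sh@287 expands (at @790) into a plain `iptables` invocation with the original arguments replayed inside a comment. A minimal reconstruction of that wrapper, read directly off the trace (SPDK's `test/nvmf/common.sh` is the authoritative definition):

```shell
ipts() {
  # Tag every inserted rule with its own argument string, so cleanup
  # can later locate and delete exactly the rules this run added.
  iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Example, as seen in the trace (needs root to actually modify the table):
#   ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

This is why the captured rule carries the comment `'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'`: the comment is the verbatim rule spec.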
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=314340 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 314340 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 314340 ']' 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=314431 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=07476d92c84689191faed25a1e29c1ce0bc7d6e417e465d4 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3pz 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 07476d92c84689191faed25a1e29c1ce0bc7d6e417e465d4 0 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 07476d92c84689191faed25a1e29c1ce0bc7d6e417e465d4 0 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:22.788 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=07476d92c84689191faed25a1e29c1ce0bc7d6e417e465d4 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3pz 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3pz 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.3pz 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=df487c3c3d78ee4bb1e5caf6ca3f958b98965ed4a8c454002399c5bf4ecec7ee 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.9Um 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key df487c3c3d78ee4bb1e5caf6ca3f958b98965ed4a8c454002399c5bf4ecec7ee 3 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 df487c3c3d78ee4bb1e5caf6ca3f958b98965ed4a8c454002399c5bf4ecec7ee 3 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=df487c3c3d78ee4bb1e5caf6ca3f958b98965ed4a8c454002399c5bf4ecec7ee 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.9Um 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.9Um 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.9Um 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=eb67595f91d580132ab3b3ff3bd46a93 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.BLx 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key eb67595f91d580132ab3b3ff3bd46a93 1 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
eb67595f91d580132ab3b3ff3bd46a93 1 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=eb67595f91d580132ab3b3ff3bd46a93 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.BLx 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.BLx 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.BLx 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fd8800f54284ed86427a0ed487fe7704755e37d667c087a3 00:19:22.789 22:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.vvG 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fd8800f54284ed86427a0ed487fe7704755e37d667c087a3 2 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fd8800f54284ed86427a0ed487fe7704755e37d667c087a3 2 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fd8800f54284ed86427a0ed487fe7704755e37d667c087a3 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:22.789 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.vvG 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.vvG 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.vvG 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4d02dad69f52052e607780404f20a1d9dc8aba9a0f535af4 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.pQp 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4d02dad69f52052e607780404f20a1d9dc8aba9a0f535af4 2 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4d02dad69f52052e607780404f20a1d9dc8aba9a0f535af4 2 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4d02dad69f52052e607780404f20a1d9dc8aba9a0f535af4 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.pQp 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.pQp 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.pQp 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:23.048 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=189a5c5a3d6a7229023bf7408bf3994e 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.TxB 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 189a5c5a3d6a7229023bf7408bf3994e 1 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 189a5c5a3d6a7229023bf7408bf3994e 1 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=189a5c5a3d6a7229023bf7408bf3994e 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
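The `gen_dhchap_key <digest> <len>` calls traced above draw `len/2` random bytes and hex-encode them, so a sha384-length key of 48 characters comes from 24 bytes of `/dev/urandom` (`xxd -p -c0 -l 24`). A minimal standalone sketch of that step, using `od` in place of the log's `xxd -p` and a hypothetical helper name:

```shell
# Sketch of the key-generation step traced above (hypothetical helper name).
# len hex characters are produced from len/2 bytes of /dev/urandom;
# the log uses `xxd -p -c0`, od -tx1 plus tr is an equivalent spelling.
gen_hex_key() {
    len=$1
    od -vAn -N "$((len / 2))" -tx1 /dev/urandom | tr -d ' \n'
}

gen_hex_key 48   # sha384-length secret: 48 lowercase hex characters
```

The digest name only selects the length here (sha256=32, sha384=48, sha512=64 in the trace); the random bytes themselves are digest-independent.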
00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.TxB 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.TxB 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.TxB 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=41d8ab72f70f8c91cc0c2cf57a14b9360d30951695bbe25e2dd612168ecacec8 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.l9G 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 41d8ab72f70f8c91cc0c2cf57a14b9360d30951695bbe25e2dd612168ecacec8 3 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 41d8ab72f70f8c91cc0c2cf57a14b9360d30951695bbe25e2dd612168ecacec8 3 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=41d8ab72f70f8c91cc0c2cf57a14b9360d30951695bbe25e2dd612168ecacec8 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.l9G 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.l9G 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.l9G 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 314340 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 314340 ']' 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
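The `format_dhchap_key`/`format_key` trace above, together with the secrets that appear later in this log, implies the DH-HMAC-CHAP secret representation: the hex string is treated as the ASCII secret, a CRC-32 of it is appended little-endian, and the result is base64-encoded as `DHHC-1:<2-hex-digit digest index>:<base64>:`. A reconstruction of what the inline `python -` step likely computes (a sketch, not the verbatim helper):

```shell
# Reconstruction of the format_key step (assumptions: the ASCII hex string is
# the secret, CRC-32 is appended little-endian, digest index is two hex digits).
format_key() {
    prefix=$1 key=$2 digest=$3
    python3 -c '
import base64, sys, zlib
key = sys.argv[2].encode()                   # ASCII hex string is the secret
crc = zlib.crc32(key).to_bytes(4, "little")  # integrity check, little-endian
b64 = base64.b64encode(key + crc).decode()
print("{}:{:02x}:{}:".format(sys.argv[1], int(sys.argv[3]), b64))' \
        "$prefix" "$key" "$digest"
}

# The sha384 key generated at the top of this section, digest index 2
format_key DHHC-1 fd8800f54284ed86427a0ed487fe7704755e37d667c087a3 2
```

If the reconstruction is right, the output matches the `DHHC-1:02:…` controller secret passed to `nvme connect` further down in this log.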
00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.049 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.308 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.308 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:23.308 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 314431 /var/tmp/host.sock 00:19:23.308 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 314431 ']' 00:19:23.308 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:23.308 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.308 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:23.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
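The `waitforlisten` calls traced above block until the SPDK application and the host application accept connections on their RPC sockets (`/var/tmp/spdk.sock`, then `/var/tmp/host.sock`), retrying up to `max_retries` times. A minimal sketch of that polling loop, under an assumed helper name (SPDK's real helper also checks that the PID is still alive):

```shell
# Sketch of waitforlisten's core loop (assumed behavior): retry with a short
# delay until something is accepting connections on the given UNIX socket.
wait_for_unix_socket() {
    sock=$1 retries=${2:-100}
    while [ "$retries" -gt 0 ]; do
        retries=$((retries - 1))
        if python3 -c '
import socket, sys
s = socket.socket(socket.AF_UNIX)
s.settimeout(1)
try:
    s.connect(sys.argv[1])
except OSError:
    sys.exit(1)' "$sock" 2>/dev/null; then
            return 0    # socket is accepting connections
        fi
        sleep 0.1
    done
    return 1            # retry budget exhausted
}
```

Polling the socket rather than the process is what lets the test proceed the moment the RPC server is actually ready, instead of sleeping for a fixed interval.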
00:19:23.308 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.308 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.567 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.567 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:23.567 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:23.567 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.567 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.567 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.567 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:23.567 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3pz 00:19:23.567 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.567 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.567 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.567 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.3pz 00:19:23.567 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.3pz 00:19:23.826 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.9Um ]] 00:19:23.826 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9Um 00:19:23.826 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.826 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.826 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.826 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9Um 00:19:23.826 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9Um 00:19:23.826 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:23.827 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BLx 00:19:23.827 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.827 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.827 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.827 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.BLx 00:19:23.827 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.BLx 00:19:24.085 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.vvG ]] 00:19:24.085 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vvG 00:19:24.085 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.085 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.085 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.085 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vvG 00:19:24.085 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vvG 00:19:24.344 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:24.344 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.pQp 00:19:24.344 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.344 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.344 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.344 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.pQp 00:19:24.344 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.pQp 00:19:24.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.TxB ]] 00:19:24.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TxB 00:19:24.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TxB 00:19:24.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TxB 00:19:24.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:24.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.l9G 00:19:24.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.862 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.862 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.l9G 00:19:24.862 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.l9G 00:19:24.862 22:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:24.862 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:24.862 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.862 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.862 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:24.862 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:25.120 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:25.120 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.120 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:25.120 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:25.120 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:25.120 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.120 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.120 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.120 22:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.120 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.120 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.121 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.121 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.379 00:19:25.379 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.379 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.379 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.638 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.638 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.638 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.638 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:25.638 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.638 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.638 { 00:19:25.638 "cntlid": 1, 00:19:25.638 "qid": 0, 00:19:25.638 "state": "enabled", 00:19:25.638 "thread": "nvmf_tgt_poll_group_000", 00:19:25.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:25.638 "listen_address": { 00:19:25.638 "trtype": "TCP", 00:19:25.638 "adrfam": "IPv4", 00:19:25.638 "traddr": "10.0.0.2", 00:19:25.638 "trsvcid": "4420" 00:19:25.638 }, 00:19:25.638 "peer_address": { 00:19:25.638 "trtype": "TCP", 00:19:25.638 "adrfam": "IPv4", 00:19:25.638 "traddr": "10.0.0.1", 00:19:25.638 "trsvcid": "39672" 00:19:25.638 }, 00:19:25.638 "auth": { 00:19:25.638 "state": "completed", 00:19:25.638 "digest": "sha256", 00:19:25.638 "dhgroup": "null" 00:19:25.638 } 00:19:25.638 } 00:19:25.638 ]' 00:19:25.638 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.638 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.638 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.638 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:25.638 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.638 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.638 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.638 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.896 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:19:25.896 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:19:29.184 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.184 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:29.184 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.184 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.184 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.184 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.184 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:19:29.184 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:29.443 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:29.443 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.443 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:29.443 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:29.443 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:29.443 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.443 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.443 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.443 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.443 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.443 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.443 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.443 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.702 00:19:29.702 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.702 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.702 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.961 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.961 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.961 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.961 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.961 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.961 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.961 { 00:19:29.961 "cntlid": 3, 00:19:29.961 "qid": 0, 00:19:29.961 "state": "enabled", 00:19:29.961 "thread": "nvmf_tgt_poll_group_000", 00:19:29.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:29.961 "listen_address": { 00:19:29.961 "trtype": "TCP", 00:19:29.961 "adrfam": "IPv4", 00:19:29.961 
"traddr": "10.0.0.2", 00:19:29.961 "trsvcid": "4420" 00:19:29.961 }, 00:19:29.961 "peer_address": { 00:19:29.961 "trtype": "TCP", 00:19:29.961 "adrfam": "IPv4", 00:19:29.961 "traddr": "10.0.0.1", 00:19:29.961 "trsvcid": "60874" 00:19:29.961 }, 00:19:29.961 "auth": { 00:19:29.961 "state": "completed", 00:19:29.961 "digest": "sha256", 00:19:29.961 "dhgroup": "null" 00:19:29.961 } 00:19:29.961 } 00:19:29.961 ]' 00:19:29.961 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.961 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.961 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.961 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:29.961 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.961 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.961 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.961 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.221 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:19:30.221 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:19:30.789 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.789 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:30.789 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.789 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.789 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.789 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.789 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:30.789 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:31.048 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:31.048 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.048 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:31.048 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:19:31.048 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:31.048 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.048 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.048 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.048 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.048 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.048 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.048 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.048 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.307 00:19:31.308 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.308 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.308 
22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.308 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.308 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.308 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.567 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.567 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.567 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.567 { 00:19:31.567 "cntlid": 5, 00:19:31.567 "qid": 0, 00:19:31.567 "state": "enabled", 00:19:31.567 "thread": "nvmf_tgt_poll_group_000", 00:19:31.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:31.567 "listen_address": { 00:19:31.567 "trtype": "TCP", 00:19:31.567 "adrfam": "IPv4", 00:19:31.567 "traddr": "10.0.0.2", 00:19:31.567 "trsvcid": "4420" 00:19:31.567 }, 00:19:31.567 "peer_address": { 00:19:31.567 "trtype": "TCP", 00:19:31.567 "adrfam": "IPv4", 00:19:31.567 "traddr": "10.0.0.1", 00:19:31.567 "trsvcid": "60902" 00:19:31.567 }, 00:19:31.567 "auth": { 00:19:31.567 "state": "completed", 00:19:31.567 "digest": "sha256", 00:19:31.567 "dhgroup": "null" 00:19:31.567 } 00:19:31.567 } 00:19:31.567 ]' 00:19:31.567 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.567 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.567 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:19:31.567 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:31.567 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.567 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.567 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.567 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.825 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:19:31.825 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:19:32.392 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.392 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:32.392 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.392 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.392 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.392 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.392 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:32.392 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:32.651 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:32.651 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.651 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.651 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:32.651 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:32.651 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.651 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:32.651 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.651 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:32.651 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.651 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:32.651 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.651 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.910 00:19:32.910 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.910 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.910 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.910 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.910 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.910 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.910 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.910 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.910 
22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.910 { 00:19:32.910 "cntlid": 7, 00:19:32.910 "qid": 0, 00:19:32.910 "state": "enabled", 00:19:32.910 "thread": "nvmf_tgt_poll_group_000", 00:19:32.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:32.910 "listen_address": { 00:19:32.910 "trtype": "TCP", 00:19:32.910 "adrfam": "IPv4", 00:19:32.910 "traddr": "10.0.0.2", 00:19:32.910 "trsvcid": "4420" 00:19:32.910 }, 00:19:32.910 "peer_address": { 00:19:32.910 "trtype": "TCP", 00:19:32.910 "adrfam": "IPv4", 00:19:32.910 "traddr": "10.0.0.1", 00:19:32.910 "trsvcid": "60934" 00:19:32.910 }, 00:19:32.910 "auth": { 00:19:32.910 "state": "completed", 00:19:32.910 "digest": "sha256", 00:19:32.910 "dhgroup": "null" 00:19:32.910 } 00:19:32.910 } 00:19:32.910 ]' 00:19:32.910 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.169 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.169 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.169 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:33.169 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.169 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.169 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.169 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.429 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:19:33.429 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:19:33.996 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.996 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:33.996 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.996 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.996 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.996 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:33.996 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.996 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:33.996 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:33.996 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:33.996 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.996 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:33.996 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:33.996 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:33.996 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.996 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.996 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.997 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.997 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.997 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.997 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.997 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.255 00:19:34.255 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.255 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.255 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.514 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.514 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.514 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.514 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.514 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.514 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.514 { 00:19:34.514 "cntlid": 9, 00:19:34.514 "qid": 0, 00:19:34.514 "state": "enabled", 00:19:34.514 "thread": "nvmf_tgt_poll_group_000", 00:19:34.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:34.514 "listen_address": { 00:19:34.514 "trtype": "TCP", 00:19:34.514 "adrfam": "IPv4", 00:19:34.514 "traddr": "10.0.0.2", 00:19:34.514 "trsvcid": "4420" 00:19:34.514 }, 00:19:34.514 "peer_address": { 00:19:34.514 "trtype": "TCP", 00:19:34.514 "adrfam": "IPv4", 00:19:34.514 "traddr": "10.0.0.1", 00:19:34.514 "trsvcid": "60956" 00:19:34.514 
}, 00:19:34.514 "auth": { 00:19:34.514 "state": "completed", 00:19:34.514 "digest": "sha256", 00:19:34.514 "dhgroup": "ffdhe2048" 00:19:34.514 } 00:19:34.514 } 00:19:34.514 ]' 00:19:34.514 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.514 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.514 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.773 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:34.773 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.773 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.773 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.773 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.031 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:19:35.031 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret 
DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:19:35.598 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.598 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:35.598 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.598 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.599 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.599 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.599 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:35.599 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:35.599 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:35.599 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.599 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:35.599 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:35.599 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:35.599 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.599 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.599 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.599 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.599 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.599 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.599 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.599 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.857 00:19:35.857 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.857 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.857 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.116 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.116 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.116 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.116 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.116 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.117 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.117 { 00:19:36.117 "cntlid": 11, 00:19:36.117 "qid": 0, 00:19:36.117 "state": "enabled", 00:19:36.117 "thread": "nvmf_tgt_poll_group_000", 00:19:36.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:36.117 "listen_address": { 00:19:36.117 "trtype": "TCP", 00:19:36.117 "adrfam": "IPv4", 00:19:36.117 "traddr": "10.0.0.2", 00:19:36.117 "trsvcid": "4420" 00:19:36.117 }, 00:19:36.117 "peer_address": { 00:19:36.117 "trtype": "TCP", 00:19:36.117 "adrfam": "IPv4", 00:19:36.117 "traddr": "10.0.0.1", 00:19:36.117 "trsvcid": "60982" 00:19:36.117 }, 00:19:36.117 "auth": { 00:19:36.117 "state": "completed", 00:19:36.117 "digest": "sha256", 00:19:36.117 "dhgroup": "ffdhe2048" 00:19:36.117 } 00:19:36.117 } 00:19:36.117 ]' 00:19:36.117 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.117 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.117 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.376 22:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:36.376 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.376 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.376 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.376 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.376 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:19:36.376 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:19:36.944 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.944 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:36.944 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:36.944 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.944 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.944 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.944 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.944 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.203 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:37.203 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.203 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.203 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:37.203 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:37.203 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.203 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.203 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.203 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:37.203 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.203 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.203 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.203 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.462 00:19:37.462 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.462 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.462 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.720 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.720 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.720 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.720 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.720 22:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.720 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.720 { 00:19:37.720 "cntlid": 13, 00:19:37.720 "qid": 0, 00:19:37.720 "state": "enabled", 00:19:37.720 "thread": "nvmf_tgt_poll_group_000", 00:19:37.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:37.720 "listen_address": { 00:19:37.720 "trtype": "TCP", 00:19:37.720 "adrfam": "IPv4", 00:19:37.720 "traddr": "10.0.0.2", 00:19:37.720 "trsvcid": "4420" 00:19:37.720 }, 00:19:37.720 "peer_address": { 00:19:37.720 "trtype": "TCP", 00:19:37.720 "adrfam": "IPv4", 00:19:37.720 "traddr": "10.0.0.1", 00:19:37.720 "trsvcid": "32782" 00:19:37.720 }, 00:19:37.720 "auth": { 00:19:37.720 "state": "completed", 00:19:37.720 "digest": "sha256", 00:19:37.720 "dhgroup": "ffdhe2048" 00:19:37.720 } 00:19:37.720 } 00:19:37.720 ]' 00:19:37.720 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.721 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.721 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.721 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:37.721 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.980 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.980 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.980 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.980 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:19:37.980 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:19:38.547 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.547 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:38.547 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.547 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.547 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.547 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.547 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:38.548 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:38.807 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:38.807 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.807 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:38.807 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:38.807 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:38.807 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.807 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:38.807 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.807 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.807 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.807 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:38.807 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:38.807 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:39.065 00:19:39.066 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.066 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.066 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.325 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.325 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.325 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.325 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.325 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.325 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.325 { 00:19:39.325 "cntlid": 15, 00:19:39.325 "qid": 0, 00:19:39.325 "state": "enabled", 00:19:39.325 "thread": "nvmf_tgt_poll_group_000", 00:19:39.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:39.325 "listen_address": { 00:19:39.325 "trtype": "TCP", 00:19:39.325 "adrfam": "IPv4", 00:19:39.325 "traddr": "10.0.0.2", 00:19:39.325 "trsvcid": "4420" 00:19:39.325 }, 00:19:39.325 "peer_address": { 00:19:39.325 "trtype": "TCP", 00:19:39.325 "adrfam": "IPv4", 00:19:39.325 "traddr": "10.0.0.1", 
00:19:39.325 "trsvcid": "35924" 00:19:39.325 }, 00:19:39.325 "auth": { 00:19:39.325 "state": "completed", 00:19:39.325 "digest": "sha256", 00:19:39.325 "dhgroup": "ffdhe2048" 00:19:39.325 } 00:19:39.325 } 00:19:39.325 ]' 00:19:39.325 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.325 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.325 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.325 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:39.325 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.325 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.325 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.325 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.583 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:19:39.583 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:19:40.150 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.150 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:40.150 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.150 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.150 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.150 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.150 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.150 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:40.150 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:40.407 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:40.407 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.407 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:40.407 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:40.407 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:40.407 22:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.407 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.407 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.407 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.408 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.408 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.408 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.408 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.666 00:19:40.666 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.666 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.666 22:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.925 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.925 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.925 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.925 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.925 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.925 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.925 { 00:19:40.925 "cntlid": 17, 00:19:40.925 "qid": 0, 00:19:40.925 "state": "enabled", 00:19:40.925 "thread": "nvmf_tgt_poll_group_000", 00:19:40.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:40.925 "listen_address": { 00:19:40.925 "trtype": "TCP", 00:19:40.925 "adrfam": "IPv4", 00:19:40.925 "traddr": "10.0.0.2", 00:19:40.925 "trsvcid": "4420" 00:19:40.925 }, 00:19:40.925 "peer_address": { 00:19:40.925 "trtype": "TCP", 00:19:40.925 "adrfam": "IPv4", 00:19:40.925 "traddr": "10.0.0.1", 00:19:40.925 "trsvcid": "35952" 00:19:40.925 }, 00:19:40.925 "auth": { 00:19:40.925 "state": "completed", 00:19:40.925 "digest": "sha256", 00:19:40.925 "dhgroup": "ffdhe3072" 00:19:40.925 } 00:19:40.925 } 00:19:40.925 ]' 00:19:40.925 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.925 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.925 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.925 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:40.925 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.925 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.925 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.925 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.184 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:19:41.184 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:19:41.751 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.751 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:41.751 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.751 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.751 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.751 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.751 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:41.751 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:42.010 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:42.010 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.010 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:42.010 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:42.010 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:42.010 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.010 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.010 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.010 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:42.010 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.010 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.010 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.010 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.270 00:19:42.270 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.270 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.270 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.529 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.529 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.529 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.529 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.529 
22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.529 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.529 { 00:19:42.529 "cntlid": 19, 00:19:42.529 "qid": 0, 00:19:42.529 "state": "enabled", 00:19:42.529 "thread": "nvmf_tgt_poll_group_000", 00:19:42.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:42.529 "listen_address": { 00:19:42.529 "trtype": "TCP", 00:19:42.529 "adrfam": "IPv4", 00:19:42.529 "traddr": "10.0.0.2", 00:19:42.529 "trsvcid": "4420" 00:19:42.529 }, 00:19:42.529 "peer_address": { 00:19:42.529 "trtype": "TCP", 00:19:42.529 "adrfam": "IPv4", 00:19:42.529 "traddr": "10.0.0.1", 00:19:42.529 "trsvcid": "35968" 00:19:42.529 }, 00:19:42.529 "auth": { 00:19:42.529 "state": "completed", 00:19:42.529 "digest": "sha256", 00:19:42.529 "dhgroup": "ffdhe3072" 00:19:42.529 } 00:19:42.529 } 00:19:42.529 ]' 00:19:42.529 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.529 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.529 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.529 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:42.529 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.529 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.529 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.529 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.788 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:19:42.788 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:19:43.356 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.356 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:43.356 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.356 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.356 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.356 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.356 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.356 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.615 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:43.615 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.615 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.615 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:43.615 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:43.615 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.615 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.615 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.615 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.615 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.615 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.615 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.615 22:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.874 00:19:43.874 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.874 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.874 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.874 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.874 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.874 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.874 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.133 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.133 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.133 { 00:19:44.133 "cntlid": 21, 00:19:44.133 "qid": 0, 00:19:44.133 "state": "enabled", 00:19:44.133 "thread": "nvmf_tgt_poll_group_000", 00:19:44.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:44.133 "listen_address": { 00:19:44.133 "trtype": "TCP", 00:19:44.133 "adrfam": "IPv4", 00:19:44.133 "traddr": "10.0.0.2", 00:19:44.133 "trsvcid": "4420" 00:19:44.133 }, 00:19:44.133 "peer_address": { 
00:19:44.133 "trtype": "TCP", 00:19:44.133 "adrfam": "IPv4", 00:19:44.133 "traddr": "10.0.0.1", 00:19:44.133 "trsvcid": "36000" 00:19:44.133 }, 00:19:44.133 "auth": { 00:19:44.133 "state": "completed", 00:19:44.133 "digest": "sha256", 00:19:44.133 "dhgroup": "ffdhe3072" 00:19:44.133 } 00:19:44.133 } 00:19:44.133 ]' 00:19:44.133 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.133 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.133 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.133 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:44.133 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.133 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.133 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.133 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.392 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:19:44.392 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:19:44.960 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.960 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:44.960 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.960 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.960 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.960 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.960 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:44.960 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:44.960 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:44.960 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.960 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.960 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:44.960 22:29:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:44.960 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.960 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:44.960 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.960 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.219 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.219 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:45.219 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:45.219 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:45.219 00:19:45.478 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.478 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.478 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.478 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.478 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.478 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.478 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.478 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.478 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.478 { 00:19:45.478 "cntlid": 23, 00:19:45.478 "qid": 0, 00:19:45.478 "state": "enabled", 00:19:45.478 "thread": "nvmf_tgt_poll_group_000", 00:19:45.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:45.478 "listen_address": { 00:19:45.478 "trtype": "TCP", 00:19:45.478 "adrfam": "IPv4", 00:19:45.478 "traddr": "10.0.0.2", 00:19:45.478 "trsvcid": "4420" 00:19:45.478 }, 00:19:45.478 "peer_address": { 00:19:45.478 "trtype": "TCP", 00:19:45.478 "adrfam": "IPv4", 00:19:45.478 "traddr": "10.0.0.1", 00:19:45.478 "trsvcid": "36034" 00:19:45.478 }, 00:19:45.478 "auth": { 00:19:45.478 "state": "completed", 00:19:45.478 "digest": "sha256", 00:19:45.478 "dhgroup": "ffdhe3072" 00:19:45.478 } 00:19:45.478 } 00:19:45.478 ]' 00:19:45.478 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.736 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.736 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.736 22:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:45.736 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.736 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.736 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.736 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.995 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:19:45.995 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:19:46.562 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.562 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:46.562 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.562 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
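The iterations above all follow the same shape: pin the host to one digest/DH-group pair, authorize the host NQN with the key under test, attach a controller using DH-HMAC-CHAP, then verify the negotiated `auth` fields via `nvmf_subsystem_get_qpairs`. The sketch below reconstructs that per-iteration command sequence from the `target/auth.sh` calls visible in this log; it only builds and prints the RPC command lines (paths, NQNs, and addresses are copied from the log, and the helper name `connect_authenticate` mirrors the function named in the trace) rather than invoking a live SPDK target.

```shell
#!/bin/sh
# Hedged reconstruction of one connect_authenticate iteration from target/auth.sh,
# as it appears in this log. This does NOT talk to SPDK; it echoes the command
# lines so the sequence is visible. All values below are taken from the log.
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
subnqn="nqn.2024-03.io.spdk:cnode0"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562"

connect_authenticate() {
  digest=$1; dhgroup=$2; keyid=$3
  # 1. Restrict the host-side bdev layer to a single digest/DH-group combination.
  echo "$rpc bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup"
  # 2. Authorize the host NQN on the subsystem with the key under test.
  echo "$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key$keyid"
  # 3. Attach a controller, which forces DH-HMAC-CHAP authentication with that key.
  echo "$rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key$keyid"
  # 4. The test then checks .auth.digest, .auth.dhgroup, and .auth.state == "completed"
  #    in the nvmf_subsystem_get_qpairs output, and finally detaches the controller.
  echo "$rpc nvmf_subsystem_get_qpairs $subnqn"
  echo "$rpc bdev_nvme_detach_controller nvme0"
}

connect_authenticate sha256 ffdhe4096 0
```

After the RPC-driven attach/detach, each iteration additionally exercises the kernel initiator path (`nvme connect ... --dhchap-secret DHHC-1:...` followed by `nvme disconnect`) and removes the host with `nvmf_subsystem_remove_host`, as seen in the surrounding log entries.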
00:19:46.563 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.563 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.563 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.563 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:46.563 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:46.563 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:46.563 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.563 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.563 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:46.563 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:46.563 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.563 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.563 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.563 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:46.563 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.563 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.563 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.563 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.822 00:19:47.081 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.081 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.081 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.081 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.081 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.081 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.081 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.081 22:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.081 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.081 { 00:19:47.081 "cntlid": 25, 00:19:47.081 "qid": 0, 00:19:47.081 "state": "enabled", 00:19:47.081 "thread": "nvmf_tgt_poll_group_000", 00:19:47.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:47.081 "listen_address": { 00:19:47.081 "trtype": "TCP", 00:19:47.081 "adrfam": "IPv4", 00:19:47.081 "traddr": "10.0.0.2", 00:19:47.081 "trsvcid": "4420" 00:19:47.081 }, 00:19:47.081 "peer_address": { 00:19:47.081 "trtype": "TCP", 00:19:47.081 "adrfam": "IPv4", 00:19:47.081 "traddr": "10.0.0.1", 00:19:47.081 "trsvcid": "36056" 00:19:47.081 }, 00:19:47.081 "auth": { 00:19:47.081 "state": "completed", 00:19:47.081 "digest": "sha256", 00:19:47.081 "dhgroup": "ffdhe4096" 00:19:47.081 } 00:19:47.081 } 00:19:47.081 ]' 00:19:47.081 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.340 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.340 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.340 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:47.340 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.340 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.340 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.340 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.598 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:19:47.599 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:19:48.166 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.166 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:48.166 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.166 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.166 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.166 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.166 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:48.166 22:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:48.426 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:48.426 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.426 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.426 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:48.426 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:48.426 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.426 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.426 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.426 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.426 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.426 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.426 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.426 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.684 00:19:48.684 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.684 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.684 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.684 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.684 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.684 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.684 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.943 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.943 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.943 { 00:19:48.943 "cntlid": 27, 00:19:48.943 "qid": 0, 00:19:48.943 "state": "enabled", 00:19:48.943 "thread": "nvmf_tgt_poll_group_000", 00:19:48.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:48.943 "listen_address": { 00:19:48.943 "trtype": "TCP", 00:19:48.943 "adrfam": "IPv4", 00:19:48.943 "traddr": "10.0.0.2", 00:19:48.943 
"trsvcid": "4420" 00:19:48.943 }, 00:19:48.943 "peer_address": { 00:19:48.943 "trtype": "TCP", 00:19:48.943 "adrfam": "IPv4", 00:19:48.943 "traddr": "10.0.0.1", 00:19:48.943 "trsvcid": "36090" 00:19:48.943 }, 00:19:48.943 "auth": { 00:19:48.943 "state": "completed", 00:19:48.943 "digest": "sha256", 00:19:48.943 "dhgroup": "ffdhe4096" 00:19:48.943 } 00:19:48.943 } 00:19:48.943 ]' 00:19:48.943 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.943 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.943 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.943 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:48.943 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.943 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.943 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.943 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.202 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:19:49.202 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:19:49.770 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.770 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:49.770 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.770 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.770 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.770 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.770 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:49.770 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:50.029 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:50.029 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.029 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.029 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:50.029 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:50.029 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.029 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.029 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.029 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.029 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.029 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.029 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.029 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.288 00:19:50.288 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.288 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:50.288 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.546 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.546 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.547 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.547 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.547 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.547 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.547 { 00:19:50.547 "cntlid": 29, 00:19:50.547 "qid": 0, 00:19:50.547 "state": "enabled", 00:19:50.547 "thread": "nvmf_tgt_poll_group_000", 00:19:50.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:50.547 "listen_address": { 00:19:50.547 "trtype": "TCP", 00:19:50.547 "adrfam": "IPv4", 00:19:50.547 "traddr": "10.0.0.2", 00:19:50.547 "trsvcid": "4420" 00:19:50.547 }, 00:19:50.547 "peer_address": { 00:19:50.547 "trtype": "TCP", 00:19:50.547 "adrfam": "IPv4", 00:19:50.547 "traddr": "10.0.0.1", 00:19:50.547 "trsvcid": "49008" 00:19:50.547 }, 00:19:50.547 "auth": { 00:19:50.547 "state": "completed", 00:19:50.547 "digest": "sha256", 00:19:50.547 "dhgroup": "ffdhe4096" 00:19:50.547 } 00:19:50.547 } 00:19:50.547 ]' 00:19:50.547 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.547 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.547 22:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.547 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:50.547 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.547 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.547 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.547 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.805 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:19:50.805 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:19:51.372 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.372 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:51.372 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.372 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.372 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.372 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.372 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.372 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.631 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:51.631 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.631 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.631 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:51.631 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:51.631 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.631 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:51.631 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.631 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.631 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.631 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:51.631 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.631 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.891 00:19:51.891 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.891 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.891 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.891 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.891 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.891 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.891 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:51.891 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.891 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.891 { 00:19:51.891 "cntlid": 31, 00:19:51.891 "qid": 0, 00:19:51.891 "state": "enabled", 00:19:51.891 "thread": "nvmf_tgt_poll_group_000", 00:19:51.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:51.891 "listen_address": { 00:19:51.891 "trtype": "TCP", 00:19:51.891 "adrfam": "IPv4", 00:19:51.891 "traddr": "10.0.0.2", 00:19:51.891 "trsvcid": "4420" 00:19:51.891 }, 00:19:51.891 "peer_address": { 00:19:51.891 "trtype": "TCP", 00:19:51.891 "adrfam": "IPv4", 00:19:51.891 "traddr": "10.0.0.1", 00:19:51.891 "trsvcid": "49030" 00:19:51.891 }, 00:19:51.891 "auth": { 00:19:51.891 "state": "completed", 00:19:51.891 "digest": "sha256", 00:19:51.891 "dhgroup": "ffdhe4096" 00:19:51.891 } 00:19:51.891 } 00:19:51.891 ]' 00:19:52.150 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.150 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.150 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.150 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:52.150 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.150 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.150 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.150 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.447 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:19:52.447 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:19:53.014 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.014 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:53.014 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.014 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.014 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.014 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.014 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.014 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:53.014 22:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:53.014 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:53.014 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.014 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.014 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:53.014 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:53.014 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.015 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.015 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.015 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.015 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.015 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.015 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.015 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.583 00:19:53.583 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.583 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.583 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.583 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.583 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.583 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.583 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.583 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.583 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.583 { 00:19:53.583 "cntlid": 33, 00:19:53.583 "qid": 0, 00:19:53.583 "state": "enabled", 00:19:53.583 "thread": "nvmf_tgt_poll_group_000", 00:19:53.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:53.583 "listen_address": { 00:19:53.583 "trtype": "TCP", 00:19:53.583 "adrfam": "IPv4", 00:19:53.583 "traddr": "10.0.0.2", 00:19:53.583 
"trsvcid": "4420" 00:19:53.583 }, 00:19:53.583 "peer_address": { 00:19:53.583 "trtype": "TCP", 00:19:53.583 "adrfam": "IPv4", 00:19:53.583 "traddr": "10.0.0.1", 00:19:53.583 "trsvcid": "49064" 00:19:53.583 }, 00:19:53.583 "auth": { 00:19:53.583 "state": "completed", 00:19:53.583 "digest": "sha256", 00:19:53.583 "dhgroup": "ffdhe6144" 00:19:53.583 } 00:19:53.583 } 00:19:53.583 ]' 00:19:53.583 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.583 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.583 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.842 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:53.842 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.842 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.842 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.842 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.101 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:19:54.101 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:19:54.670 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.670 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:54.670 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.670 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.670 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.670 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.670 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:54.670 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:54.670 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:54.670 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.670 22:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.670 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:54.670 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:54.670 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.671 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.671 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.671 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.671 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.671 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.671 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.671 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.238 00:19:55.238 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.238 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.238 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.238 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.238 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.238 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.238 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.238 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.238 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.238 { 00:19:55.238 "cntlid": 35, 00:19:55.238 "qid": 0, 00:19:55.238 "state": "enabled", 00:19:55.238 "thread": "nvmf_tgt_poll_group_000", 00:19:55.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:55.238 "listen_address": { 00:19:55.238 "trtype": "TCP", 00:19:55.238 "adrfam": "IPv4", 00:19:55.238 "traddr": "10.0.0.2", 00:19:55.238 "trsvcid": "4420" 00:19:55.238 }, 00:19:55.238 "peer_address": { 00:19:55.238 "trtype": "TCP", 00:19:55.238 "adrfam": "IPv4", 00:19:55.238 "traddr": "10.0.0.1", 00:19:55.238 "trsvcid": "49090" 00:19:55.238 }, 00:19:55.238 "auth": { 00:19:55.238 "state": "completed", 00:19:55.238 "digest": "sha256", 00:19:55.238 "dhgroup": "ffdhe6144" 00:19:55.238 } 00:19:55.238 } 00:19:55.238 ]' 00:19:55.238 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.238 22:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.238 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.497 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:55.497 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.497 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.497 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.497 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.497 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:19:55.497 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:19:56.064 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.323 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:56.323 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.323 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.323 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.323 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.323 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:56.323 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:56.323 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:56.323 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.323 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.323 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:56.324 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:56.324 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.324 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:56.324 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.324 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.324 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.324 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.324 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.324 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.891 00:19:56.891 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.891 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.891 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.891 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.891 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.891 22:29:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.891 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.891 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.891 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.891 { 00:19:56.891 "cntlid": 37, 00:19:56.891 "qid": 0, 00:19:56.891 "state": "enabled", 00:19:56.891 "thread": "nvmf_tgt_poll_group_000", 00:19:56.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:56.891 "listen_address": { 00:19:56.891 "trtype": "TCP", 00:19:56.891 "adrfam": "IPv4", 00:19:56.891 "traddr": "10.0.0.2", 00:19:56.891 "trsvcid": "4420" 00:19:56.891 }, 00:19:56.891 "peer_address": { 00:19:56.891 "trtype": "TCP", 00:19:56.891 "adrfam": "IPv4", 00:19:56.891 "traddr": "10.0.0.1", 00:19:56.891 "trsvcid": "49128" 00:19:56.891 }, 00:19:56.891 "auth": { 00:19:56.891 "state": "completed", 00:19:56.891 "digest": "sha256", 00:19:56.891 "dhgroup": "ffdhe6144" 00:19:56.891 } 00:19:56.891 } 00:19:56.891 ]' 00:19:56.891 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.150 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.150 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.150 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:57.150 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.150 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.150 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.150 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.409 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:19:57.409 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.976 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.544 00:19:58.544 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.544 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.544 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.544 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.544 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.544 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.544 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.544 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.544 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.544 { 00:19:58.544 "cntlid": 39, 00:19:58.544 "qid": 0, 00:19:58.544 "state": "enabled", 00:19:58.544 "thread": "nvmf_tgt_poll_group_000", 00:19:58.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:58.544 "listen_address": { 00:19:58.544 "trtype": "TCP", 00:19:58.544 "adrfam": 
"IPv4", 00:19:58.544 "traddr": "10.0.0.2", 00:19:58.544 "trsvcid": "4420" 00:19:58.544 }, 00:19:58.544 "peer_address": { 00:19:58.544 "trtype": "TCP", 00:19:58.544 "adrfam": "IPv4", 00:19:58.544 "traddr": "10.0.0.1", 00:19:58.544 "trsvcid": "49142" 00:19:58.544 }, 00:19:58.544 "auth": { 00:19:58.544 "state": "completed", 00:19:58.544 "digest": "sha256", 00:19:58.544 "dhgroup": "ffdhe6144" 00:19:58.544 } 00:19:58.544 } 00:19:58.544 ]' 00:19:58.544 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.544 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.802 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.802 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:58.802 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.802 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.802 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.802 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.061 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:19:59.061 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:19:59.628 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.628 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.629 
22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.629 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.197 00:20:00.197 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.197 22:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.197 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.455 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.455 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.455 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.455 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.455 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.455 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.455 { 00:20:00.455 "cntlid": 41, 00:20:00.455 "qid": 0, 00:20:00.455 "state": "enabled", 00:20:00.455 "thread": "nvmf_tgt_poll_group_000", 00:20:00.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:00.455 "listen_address": { 00:20:00.455 "trtype": "TCP", 00:20:00.455 "adrfam": "IPv4", 00:20:00.455 "traddr": "10.0.0.2", 00:20:00.455 "trsvcid": "4420" 00:20:00.455 }, 00:20:00.455 "peer_address": { 00:20:00.455 "trtype": "TCP", 00:20:00.455 "adrfam": "IPv4", 00:20:00.455 "traddr": "10.0.0.1", 00:20:00.455 "trsvcid": "40484" 00:20:00.455 }, 00:20:00.455 "auth": { 00:20:00.455 "state": "completed", 00:20:00.455 "digest": "sha256", 00:20:00.455 "dhgroup": "ffdhe8192" 00:20:00.455 } 00:20:00.455 } 00:20:00.455 ]' 00:20:00.455 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.455 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]]
00:20:00.455 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:00.455 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:00.455 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:00.455 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:00.455 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:00.455 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:00.714 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=:
00:20:00.714 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=:
00:20:01.282 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:01.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:01.283 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:01.283 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:01.283 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:01.283 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:01.283 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:01.283 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:01.283 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:01.542 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1
00:20:01.542 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:01.542 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:01.542 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:01.542 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:01.542 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:01.542 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:01.542 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:01.542 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:01.542 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:01.542 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:01.542 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:01.542 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:02.110
00:20:02.110 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:02.110 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:02.110 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:02.110 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:02.110 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:02.110 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.110 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:02.110 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.110 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:02.110 {
00:20:02.110 "cntlid": 43,
00:20:02.110 "qid": 0,
00:20:02.110 "state": "enabled",
00:20:02.110 "thread": "nvmf_tgt_poll_group_000",
00:20:02.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:02.110 "listen_address": {
00:20:02.110 "trtype": "TCP",
00:20:02.110 "adrfam": "IPv4",
00:20:02.110 "traddr": "10.0.0.2",
00:20:02.110 "trsvcid": "4420"
00:20:02.110 },
00:20:02.110 "peer_address": {
00:20:02.110 "trtype": "TCP",
00:20:02.110 "adrfam": "IPv4",
00:20:02.110 "traddr": "10.0.0.1",
00:20:02.110 "trsvcid": "40502"
00:20:02.110 },
00:20:02.110 "auth": {
00:20:02.110 "state": "completed",
00:20:02.110 "digest": "sha256",
00:20:02.110 "dhgroup": "ffdhe8192"
00:20:02.110 }
00:20:02.110 }
00:20:02.110 ]'
00:20:02.110 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:02.369 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:02.369 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:02.369 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:02.369 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:02.369 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:02.369 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:02.369 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:02.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==:
00:20:02.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==:
00:20:03.195 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:03.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:03.195 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:03.195 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:03.195 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.195 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:03.195 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:03.195 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:03.195 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:03.195 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2
00:20:03.195 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:03.195 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:03.195 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:03.195 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:03.195 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:03.195 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:03.195 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:03.195 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.454 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:03.454 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:03.454 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:03.455 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:03.713
00:20:03.713 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:03.713 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:03.713 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:03.972 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:03.972 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:03.972 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:03.972 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.972 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:03.972 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:03.972 {
00:20:03.972 "cntlid": 45,
00:20:03.972 "qid": 0,
00:20:03.972 "state": "enabled",
00:20:03.972 "thread": "nvmf_tgt_poll_group_000",
00:20:03.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:03.972 "listen_address": {
00:20:03.972 "trtype": "TCP",
00:20:03.972 "adrfam": "IPv4",
00:20:03.972 "traddr": "10.0.0.2",
00:20:03.972 "trsvcid": "4420"
00:20:03.972 },
00:20:03.972 "peer_address": {
00:20:03.972 "trtype": "TCP",
00:20:03.972 "adrfam": "IPv4",
00:20:03.972 "traddr": "10.0.0.1",
00:20:03.972 "trsvcid": "40534"
00:20:03.972 },
00:20:03.972 "auth": {
00:20:03.972 "state": "completed",
00:20:03.972 "digest": "sha256",
00:20:03.972 "dhgroup": "ffdhe8192"
00:20:03.972 }
00:20:03.972 }
00:20:03.972 ]'
00:20:03.972 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:03.972 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:03.972 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:04.231 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:04.231 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:04.231 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:04.231 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:04.231 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:04.490 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0:
00:20:04.490 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0:
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:05.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:05.058 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:05.626
00:20:05.626 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:05.626 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:05.626 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:05.895 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:05.895 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:05.895 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.895 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:05.895 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.895 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:05.895 {
00:20:05.895 "cntlid": 47,
00:20:05.895 "qid": 0,
00:20:05.895 "state": "enabled",
00:20:05.895 "thread": "nvmf_tgt_poll_group_000",
00:20:05.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:05.895 "listen_address": {
00:20:05.895 "trtype": "TCP",
00:20:05.895 "adrfam": "IPv4",
00:20:05.895 "traddr": "10.0.0.2",
00:20:05.895 "trsvcid": "4420"
00:20:05.895 },
00:20:05.895 "peer_address": {
00:20:05.895 "trtype": "TCP",
00:20:05.895 "adrfam": "IPv4",
00:20:05.895 "traddr": "10.0.0.1",
00:20:05.895 "trsvcid": "40548"
00:20:05.895 },
00:20:05.895 "auth": {
00:20:05.895 "state": "completed",
00:20:05.895 "digest": "sha256",
00:20:05.895 "dhgroup": "ffdhe8192"
00:20:05.895 }
00:20:05.895 }
00:20:05.895 ]'
00:20:05.895 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:05.895 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:05.895 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:05.895 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:05.895 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:05.895 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:05.895 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:05.895 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:06.160 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=:
00:20:06.160 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=:
00:20:06.728 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:06.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:06.728 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:06.728 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:06.728 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.728 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:06.728 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:20:06.728 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:06.728 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:06.728 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:06.728 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:06.986 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:20:06.987 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:06.987 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:06.987 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:06.987 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:06.987 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:06.987 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:06.987 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:06.987 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.987 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:06.987 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:06.987 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:06.987 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:07.245
00:20:07.245 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:07.245 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:07.245 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:07.504 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:07.504 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:07.504 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:07.504 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:07.504 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:07.504 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:07.504 {
00:20:07.504 "cntlid": 49,
00:20:07.504 "qid": 0,
00:20:07.504 "state": "enabled",
00:20:07.504 "thread": "nvmf_tgt_poll_group_000",
00:20:07.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:07.504 "listen_address": {
00:20:07.504 "trtype": "TCP",
00:20:07.504 "adrfam": "IPv4",
00:20:07.504 "traddr": "10.0.0.2",
00:20:07.504 "trsvcid": "4420"
00:20:07.504 },
00:20:07.504 "peer_address": {
00:20:07.504 "trtype": "TCP",
00:20:07.504 "adrfam": "IPv4",
00:20:07.504 "traddr": "10.0.0.1",
00:20:07.504 "trsvcid": "40594"
00:20:07.504 },
00:20:07.504 "auth": {
00:20:07.504 "state": "completed",
00:20:07.504 "digest": "sha384",
00:20:07.504 "dhgroup": "null"
00:20:07.504 }
00:20:07.504 }
00:20:07.504 ]'
00:20:07.504 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:07.504 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:07.504 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:07.504 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:07.504 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:07.504 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:07.504 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:07.504 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:07.763 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=:
00:20:07.763 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=:
00:20:08.331 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:08.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:08.331 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:08.331 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.331 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:08.331 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.331 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:08.331 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:08.331 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:08.590 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:20:08.590 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:08.590 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:08.590 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:08.590 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:08.590 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:08.590 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:08.590 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.590 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:08.590 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.590 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:08.590 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:08.590 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:08.849
00:20:08.849 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:08.849 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:08.849 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:08.849 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:08.849 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:08.849 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.849 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.107 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.107 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:09.107 {
00:20:09.107 "cntlid": 51,
00:20:09.107 "qid": 0,
00:20:09.107 "state": "enabled",
00:20:09.107 "thread": "nvmf_tgt_poll_group_000",
00:20:09.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:09.108 "listen_address": {
00:20:09.108 "trtype": "TCP",
00:20:09.108 "adrfam": "IPv4",
00:20:09.108 "traddr": "10.0.0.2",
00:20:09.108 "trsvcid": "4420"
00:20:09.108 },
00:20:09.108 "peer_address": {
00:20:09.108 "trtype": "TCP",
00:20:09.108 "adrfam": "IPv4",
00:20:09.108 "traddr": "10.0.0.1",
00:20:09.108 "trsvcid": "60572"
00:20:09.108 },
00:20:09.108 "auth": {
00:20:09.108 "state": "completed",
00:20:09.108 "digest": "sha384",
00:20:09.108 "dhgroup": "null"
00:20:09.108 }
00:20:09.108 }
00:20:09.108 ]'
00:20:09.108 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:09.108 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:09.108 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:09.108 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:09.108 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:09.108 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:09.108 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:09.108 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:09.366 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==:
00:20:09.366 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==:
00:20:09.934 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:09.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:09.934 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:09.934 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.934 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.934 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.934 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:09.934 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:09.934 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:10.193 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:20:10.193 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup
key ckey qpairs 00:20:10.193 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:10.193 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:10.193 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:10.193 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.193 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.193 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.193 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.193 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.193 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.193 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.193 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.453 00:20:10.453 22:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.453 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.453 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.453 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.453 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.453 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.453 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.453 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.453 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.453 { 00:20:10.453 "cntlid": 53, 00:20:10.453 "qid": 0, 00:20:10.453 "state": "enabled", 00:20:10.453 "thread": "nvmf_tgt_poll_group_000", 00:20:10.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:10.453 "listen_address": { 00:20:10.453 "trtype": "TCP", 00:20:10.453 "adrfam": "IPv4", 00:20:10.453 "traddr": "10.0.0.2", 00:20:10.453 "trsvcid": "4420" 00:20:10.453 }, 00:20:10.453 "peer_address": { 00:20:10.453 "trtype": "TCP", 00:20:10.453 "adrfam": "IPv4", 00:20:10.453 "traddr": "10.0.0.1", 00:20:10.453 "trsvcid": "60588" 00:20:10.453 }, 00:20:10.453 "auth": { 00:20:10.453 "state": "completed", 00:20:10.453 "digest": "sha384", 00:20:10.453 "dhgroup": "null" 00:20:10.453 } 00:20:10.453 } 00:20:10.453 ]' 00:20:10.453 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:20:10.712 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.712 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.712 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:10.712 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.712 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.712 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.712 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.970 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:20:10.970 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:20:11.537 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.537 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:11.537 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.537 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.537 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.537 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.537 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:11.537 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:11.537 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:11.796 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.796 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:11.796 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:11.796 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:11.796 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.796 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:11.796 
22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.796 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.796 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.796 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:11.796 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.796 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.796 00:20:11.796 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.796 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.796 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.056 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.056 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.056 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.056 22:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.056 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.056 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.056 { 00:20:12.056 "cntlid": 55, 00:20:12.056 "qid": 0, 00:20:12.056 "state": "enabled", 00:20:12.056 "thread": "nvmf_tgt_poll_group_000", 00:20:12.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:12.056 "listen_address": { 00:20:12.056 "trtype": "TCP", 00:20:12.056 "adrfam": "IPv4", 00:20:12.056 "traddr": "10.0.0.2", 00:20:12.056 "trsvcid": "4420" 00:20:12.056 }, 00:20:12.056 "peer_address": { 00:20:12.056 "trtype": "TCP", 00:20:12.056 "adrfam": "IPv4", 00:20:12.056 "traddr": "10.0.0.1", 00:20:12.056 "trsvcid": "60616" 00:20:12.056 }, 00:20:12.056 "auth": { 00:20:12.056 "state": "completed", 00:20:12.056 "digest": "sha384", 00:20:12.056 "dhgroup": "null" 00:20:12.056 } 00:20:12.056 } 00:20:12.056 ]' 00:20:12.056 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.056 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.314 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.314 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:12.314 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.314 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.314 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.314 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.573 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:20:12.573 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:20:13.140 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.140 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:13.140 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.140 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.140 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.140 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.140 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.140 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:13.141 22:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:13.141 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:13.141 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.141 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:13.141 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:13.141 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:13.141 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.141 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.141 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.141 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.141 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.141 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.141 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.141 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.399 00:20:13.399 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.399 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.399 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.658 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.658 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.658 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.658 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.658 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.658 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.658 { 00:20:13.658 "cntlid": 57, 00:20:13.658 "qid": 0, 00:20:13.658 "state": "enabled", 00:20:13.658 "thread": "nvmf_tgt_poll_group_000", 00:20:13.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:13.658 "listen_address": { 00:20:13.658 "trtype": "TCP", 00:20:13.658 "adrfam": "IPv4", 00:20:13.658 "traddr": "10.0.0.2", 00:20:13.658 
"trsvcid": "4420" 00:20:13.658 }, 00:20:13.658 "peer_address": { 00:20:13.658 "trtype": "TCP", 00:20:13.658 "adrfam": "IPv4", 00:20:13.658 "traddr": "10.0.0.1", 00:20:13.658 "trsvcid": "60644" 00:20:13.658 }, 00:20:13.658 "auth": { 00:20:13.658 "state": "completed", 00:20:13.658 "digest": "sha384", 00:20:13.658 "dhgroup": "ffdhe2048" 00:20:13.658 } 00:20:13.658 } 00:20:13.658 ]' 00:20:13.658 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.658 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.658 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.917 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:13.917 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.917 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.917 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.917 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.176 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:20:14.176 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.747 22:29:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.747 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.005 00:20:15.005 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.005 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.005 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.264 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.264 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.264 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.264 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.264 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.264 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.264 { 00:20:15.264 "cntlid": 59, 00:20:15.264 "qid": 0, 00:20:15.264 "state": "enabled", 00:20:15.264 "thread": "nvmf_tgt_poll_group_000", 00:20:15.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:15.264 "listen_address": { 00:20:15.264 "trtype": "TCP", 00:20:15.264 "adrfam": "IPv4", 00:20:15.264 "traddr": "10.0.0.2", 00:20:15.264 "trsvcid": "4420" 00:20:15.264 }, 00:20:15.264 "peer_address": { 00:20:15.264 "trtype": "TCP", 00:20:15.264 "adrfam": "IPv4", 00:20:15.264 "traddr": "10.0.0.1", 00:20:15.264 "trsvcid": "60672" 00:20:15.264 }, 00:20:15.264 "auth": { 00:20:15.264 "state": "completed", 00:20:15.264 "digest": "sha384", 00:20:15.264 "dhgroup": "ffdhe2048" 00:20:15.264 } 00:20:15.264 } 00:20:15.264 ]' 00:20:15.264 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.264 22:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.264 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.264 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:15.264 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.525 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.525 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.525 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.525 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:20:15.525 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:20:16.092 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.092 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:16.092 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.092 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.092 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.092 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.092 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:16.092 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:16.351 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:16.351 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.351 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.351 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:16.351 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:16.351 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.351 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:16.351 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.351 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.351 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.351 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.351 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.351 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.610 00:20:16.610 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.610 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.610 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.868 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.868 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.868 22:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.868 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.868 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.868 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.868 { 00:20:16.868 "cntlid": 61, 00:20:16.868 "qid": 0, 00:20:16.868 "state": "enabled", 00:20:16.868 "thread": "nvmf_tgt_poll_group_000", 00:20:16.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:16.868 "listen_address": { 00:20:16.868 "trtype": "TCP", 00:20:16.868 "adrfam": "IPv4", 00:20:16.868 "traddr": "10.0.0.2", 00:20:16.868 "trsvcid": "4420" 00:20:16.868 }, 00:20:16.868 "peer_address": { 00:20:16.868 "trtype": "TCP", 00:20:16.868 "adrfam": "IPv4", 00:20:16.868 "traddr": "10.0.0.1", 00:20:16.868 "trsvcid": "60686" 00:20:16.868 }, 00:20:16.868 "auth": { 00:20:16.868 "state": "completed", 00:20:16.868 "digest": "sha384", 00:20:16.868 "dhgroup": "ffdhe2048" 00:20:16.868 } 00:20:16.868 } 00:20:16.868 ]' 00:20:16.868 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.868 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.868 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.869 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:16.869 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.869 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.869 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.869 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.128 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:20:17.128 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:20:17.694 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.694 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:17.694 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.694 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.694 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.694 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.694 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:17.694 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:17.953 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:17.953 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.953 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:17.953 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:17.953 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:17.953 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.953 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:17.953 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.953 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.953 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.953 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:17.953 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.953 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.212 00:20:18.212 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.212 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.212 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.471 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.471 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.471 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.471 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.471 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.471 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.471 { 00:20:18.471 "cntlid": 63, 00:20:18.471 "qid": 0, 00:20:18.471 "state": "enabled", 00:20:18.471 "thread": "nvmf_tgt_poll_group_000", 00:20:18.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:18.471 "listen_address": { 00:20:18.471 "trtype": "TCP", 00:20:18.471 "adrfam": 
"IPv4", 00:20:18.472 "traddr": "10.0.0.2", 00:20:18.472 "trsvcid": "4420" 00:20:18.472 }, 00:20:18.472 "peer_address": { 00:20:18.472 "trtype": "TCP", 00:20:18.472 "adrfam": "IPv4", 00:20:18.472 "traddr": "10.0.0.1", 00:20:18.472 "trsvcid": "60712" 00:20:18.472 }, 00:20:18.472 "auth": { 00:20:18.472 "state": "completed", 00:20:18.472 "digest": "sha384", 00:20:18.472 "dhgroup": "ffdhe2048" 00:20:18.472 } 00:20:18.472 } 00:20:18.472 ]' 00:20:18.472 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.472 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.472 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.472 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:18.472 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.472 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.472 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.472 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.730 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:20:18.731 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:20:19.298 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.298 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:19.298 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.298 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.298 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.298 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.298 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.298 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:19.298 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:19.558 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:19.558 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.558 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.558 
22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:19.558 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:19.558 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.558 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.558 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.558 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.558 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.558 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.558 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.558 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.817 00:20:19.817 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.817 22:29:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.817 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.075 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.075 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.075 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.075 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.076 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.076 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.076 { 00:20:20.076 "cntlid": 65, 00:20:20.076 "qid": 0, 00:20:20.076 "state": "enabled", 00:20:20.076 "thread": "nvmf_tgt_poll_group_000", 00:20:20.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:20.076 "listen_address": { 00:20:20.076 "trtype": "TCP", 00:20:20.076 "adrfam": "IPv4", 00:20:20.076 "traddr": "10.0.0.2", 00:20:20.076 "trsvcid": "4420" 00:20:20.076 }, 00:20:20.076 "peer_address": { 00:20:20.076 "trtype": "TCP", 00:20:20.076 "adrfam": "IPv4", 00:20:20.076 "traddr": "10.0.0.1", 00:20:20.076 "trsvcid": "42854" 00:20:20.076 }, 00:20:20.076 "auth": { 00:20:20.076 "state": "completed", 00:20:20.076 "digest": "sha384", 00:20:20.076 "dhgroup": "ffdhe3072" 00:20:20.076 } 00:20:20.076 } 00:20:20.076 ]' 00:20:20.076 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.076 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:20.076 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.076 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:20.076 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.076 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.076 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.076 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.334 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:20:20.334 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:20:20.902 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.902 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.902 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.902 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.902 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.902 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.902 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:20.902 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:21.161 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:21.161 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.161 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:21.161 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:21.161 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:21.161 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.161 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:21.161 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.161 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.161 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.161 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.161 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.161 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.420 00:20:21.420 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.420 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.420 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.678 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.678 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.678 22:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.679 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.679 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.679 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.679 { 00:20:21.679 "cntlid": 67, 00:20:21.679 "qid": 0, 00:20:21.679 "state": "enabled", 00:20:21.679 "thread": "nvmf_tgt_poll_group_000", 00:20:21.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:21.679 "listen_address": { 00:20:21.679 "trtype": "TCP", 00:20:21.679 "adrfam": "IPv4", 00:20:21.679 "traddr": "10.0.0.2", 00:20:21.679 "trsvcid": "4420" 00:20:21.679 }, 00:20:21.679 "peer_address": { 00:20:21.679 "trtype": "TCP", 00:20:21.679 "adrfam": "IPv4", 00:20:21.679 "traddr": "10.0.0.1", 00:20:21.679 "trsvcid": "42882" 00:20:21.679 }, 00:20:21.679 "auth": { 00:20:21.679 "state": "completed", 00:20:21.679 "digest": "sha384", 00:20:21.679 "dhgroup": "ffdhe3072" 00:20:21.679 } 00:20:21.679 } 00:20:21.679 ]' 00:20:21.679 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.679 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.679 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.679 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:21.679 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.679 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.679 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.679 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.937 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:20:21.938 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:20:22.505 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.505 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:22.505 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.505 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.505 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.505 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.505 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:22.505 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:22.764 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:22.764 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.764 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.764 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:22.764 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:22.764 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.765 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.765 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.765 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.765 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.765 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.765 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.765 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.023 00:20:23.023 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.023 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.023 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.282 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.282 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.282 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.282 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.282 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.282 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.282 { 00:20:23.282 "cntlid": 69, 00:20:23.282 "qid": 0, 00:20:23.282 "state": "enabled", 00:20:23.282 "thread": "nvmf_tgt_poll_group_000", 00:20:23.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:23.282 
"listen_address": { 00:20:23.282 "trtype": "TCP", 00:20:23.282 "adrfam": "IPv4", 00:20:23.282 "traddr": "10.0.0.2", 00:20:23.282 "trsvcid": "4420" 00:20:23.282 }, 00:20:23.282 "peer_address": { 00:20:23.282 "trtype": "TCP", 00:20:23.282 "adrfam": "IPv4", 00:20:23.282 "traddr": "10.0.0.1", 00:20:23.282 "trsvcid": "42910" 00:20:23.282 }, 00:20:23.282 "auth": { 00:20:23.282 "state": "completed", 00:20:23.282 "digest": "sha384", 00:20:23.282 "dhgroup": "ffdhe3072" 00:20:23.282 } 00:20:23.282 } 00:20:23.282 ]' 00:20:23.282 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.282 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.282 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.282 22:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:23.282 22:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.282 22:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.282 22:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.282 22:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.541 22:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:20:23.541 22:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:20:24.109 22:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.109 22:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:24.109 22:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.109 22:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.109 22:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.109 22:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.109 22:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:24.110 22:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:24.367 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:24.367 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.367 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:24.367 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:24.367 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:24.367 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.367 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:24.367 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.367 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.367 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.367 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:24.367 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.367 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.625 00:20:24.625 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.625 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:24.625 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.625 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.625 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.625 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.625 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.625 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.625 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.625 { 00:20:24.625 "cntlid": 71, 00:20:24.625 "qid": 0, 00:20:24.625 "state": "enabled", 00:20:24.625 "thread": "nvmf_tgt_poll_group_000", 00:20:24.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:24.625 "listen_address": { 00:20:24.625 "trtype": "TCP", 00:20:24.625 "adrfam": "IPv4", 00:20:24.625 "traddr": "10.0.0.2", 00:20:24.625 "trsvcid": "4420" 00:20:24.625 }, 00:20:24.625 "peer_address": { 00:20:24.625 "trtype": "TCP", 00:20:24.625 "adrfam": "IPv4", 00:20:24.625 "traddr": "10.0.0.1", 00:20:24.625 "trsvcid": "42936" 00:20:24.625 }, 00:20:24.625 "auth": { 00:20:24.625 "state": "completed", 00:20:24.625 "digest": "sha384", 00:20:24.625 "dhgroup": "ffdhe3072" 00:20:24.625 } 00:20:24.625 } 00:20:24.625 ]' 00:20:24.625 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.883 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.883 22:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.883 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:24.883 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.883 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.883 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.883 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.141 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:20:25.141 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:20:25.709 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.709 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:25.709 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:25.709 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.709 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.709 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.709 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.709 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:25.709 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:25.968 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:25.968 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.968 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.968 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:25.968 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:25.968 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.968 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.968 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:25.968 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.968 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.968 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.968 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.968 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.227 00:20:26.227 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.227 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.227 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.227 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.227 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.227 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.227 22:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.227 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.227 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.227 { 00:20:26.227 "cntlid": 73, 00:20:26.227 "qid": 0, 00:20:26.227 "state": "enabled", 00:20:26.227 "thread": "nvmf_tgt_poll_group_000", 00:20:26.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:26.227 "listen_address": { 00:20:26.227 "trtype": "TCP", 00:20:26.227 "adrfam": "IPv4", 00:20:26.227 "traddr": "10.0.0.2", 00:20:26.227 "trsvcid": "4420" 00:20:26.227 }, 00:20:26.227 "peer_address": { 00:20:26.227 "trtype": "TCP", 00:20:26.227 "adrfam": "IPv4", 00:20:26.227 "traddr": "10.0.0.1", 00:20:26.227 "trsvcid": "42970" 00:20:26.227 }, 00:20:26.227 "auth": { 00:20:26.227 "state": "completed", 00:20:26.227 "digest": "sha384", 00:20:26.227 "dhgroup": "ffdhe4096" 00:20:26.227 } 00:20:26.227 } 00:20:26.227 ]' 00:20:26.227 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.486 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.486 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.486 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:26.486 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.486 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.486 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.486 22:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.745 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:20:26.745 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:20:27.312 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.312 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:27.312 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.312 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.312 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.312 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.312 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:27.312 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:27.312 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:27.312 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.312 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.312 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:27.312 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:27.312 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.312 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.312 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.312 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.312 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.313 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.313 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.313 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.571 00:20:27.829 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.829 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.829 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.829 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.829 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.829 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.829 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.829 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.829 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.829 { 00:20:27.829 "cntlid": 75, 00:20:27.829 "qid": 0, 00:20:27.829 "state": "enabled", 00:20:27.829 "thread": "nvmf_tgt_poll_group_000", 00:20:27.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:27.829 
"listen_address": { 00:20:27.829 "trtype": "TCP", 00:20:27.829 "adrfam": "IPv4", 00:20:27.829 "traddr": "10.0.0.2", 00:20:27.829 "trsvcid": "4420" 00:20:27.829 }, 00:20:27.829 "peer_address": { 00:20:27.829 "trtype": "TCP", 00:20:27.829 "adrfam": "IPv4", 00:20:27.829 "traddr": "10.0.0.1", 00:20:27.829 "trsvcid": "42992" 00:20:27.829 }, 00:20:27.829 "auth": { 00:20:27.829 "state": "completed", 00:20:27.829 "digest": "sha384", 00:20:27.829 "dhgroup": "ffdhe4096" 00:20:27.829 } 00:20:27.829 } 00:20:27.829 ]' 00:20:27.829 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.091 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.091 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.091 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:28.091 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.091 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.091 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.091 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.350 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:20:28.350 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.918 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.185 00:20:29.185 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:29.185 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.185 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.445 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.445 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.445 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.445 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.445 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.445 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.445 { 00:20:29.445 "cntlid": 77, 00:20:29.445 "qid": 0, 00:20:29.445 "state": "enabled", 00:20:29.445 "thread": "nvmf_tgt_poll_group_000", 00:20:29.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:29.445 "listen_address": { 00:20:29.445 "trtype": "TCP", 00:20:29.445 "adrfam": "IPv4", 00:20:29.445 "traddr": "10.0.0.2", 00:20:29.445 "trsvcid": "4420" 00:20:29.445 }, 00:20:29.445 "peer_address": { 00:20:29.445 "trtype": "TCP", 00:20:29.445 "adrfam": "IPv4", 00:20:29.445 "traddr": "10.0.0.1", 00:20:29.445 "trsvcid": "38784" 00:20:29.445 }, 00:20:29.445 "auth": { 00:20:29.445 "state": "completed", 00:20:29.445 "digest": "sha384", 00:20:29.445 "dhgroup": "ffdhe4096" 00:20:29.445 } 00:20:29.445 } 00:20:29.445 ]' 00:20:29.445 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.445 22:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.445 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.703 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:29.703 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.703 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.703 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.703 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.703 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:20:29.703 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:20:30.270 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.270 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:30.270 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.270 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.528 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.528 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.528 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.528 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.528 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:30.528 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.528 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.528 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:30.528 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:30.528 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.528 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:30.528 22:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.528 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.528 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.528 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:30.528 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.528 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.786 00:20:30.786 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.786 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.786 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.045 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.045 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.045 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.045 22:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.045 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.045 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.045 { 00:20:31.045 "cntlid": 79, 00:20:31.045 "qid": 0, 00:20:31.045 "state": "enabled", 00:20:31.045 "thread": "nvmf_tgt_poll_group_000", 00:20:31.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:31.045 "listen_address": { 00:20:31.045 "trtype": "TCP", 00:20:31.046 "adrfam": "IPv4", 00:20:31.046 "traddr": "10.0.0.2", 00:20:31.046 "trsvcid": "4420" 00:20:31.046 }, 00:20:31.046 "peer_address": { 00:20:31.046 "trtype": "TCP", 00:20:31.046 "adrfam": "IPv4", 00:20:31.046 "traddr": "10.0.0.1", 00:20:31.046 "trsvcid": "38820" 00:20:31.046 }, 00:20:31.046 "auth": { 00:20:31.046 "state": "completed", 00:20:31.046 "digest": "sha384", 00:20:31.046 "dhgroup": "ffdhe4096" 00:20:31.046 } 00:20:31.046 } 00:20:31.046 ]' 00:20:31.046 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.046 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.046 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.304 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:31.304 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.304 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.304 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.304 22:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.562 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:20:31.562 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.129 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.695 00:20:32.695 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.695 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.695 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.695 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.695 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.695 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.695 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.695 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.695 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.695 { 00:20:32.695 "cntlid": 81, 00:20:32.695 "qid": 0, 00:20:32.695 "state": "enabled", 00:20:32.695 "thread": "nvmf_tgt_poll_group_000", 00:20:32.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:32.696 "listen_address": { 
00:20:32.696 "trtype": "TCP", 00:20:32.696 "adrfam": "IPv4", 00:20:32.696 "traddr": "10.0.0.2", 00:20:32.696 "trsvcid": "4420" 00:20:32.696 }, 00:20:32.696 "peer_address": { 00:20:32.696 "trtype": "TCP", 00:20:32.696 "adrfam": "IPv4", 00:20:32.696 "traddr": "10.0.0.1", 00:20:32.696 "trsvcid": "38854" 00:20:32.696 }, 00:20:32.696 "auth": { 00:20:32.696 "state": "completed", 00:20:32.696 "digest": "sha384", 00:20:32.696 "dhgroup": "ffdhe6144" 00:20:32.696 } 00:20:32.696 } 00:20:32.696 ]' 00:20:32.696 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.954 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.954 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.954 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:32.954 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.955 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.955 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.955 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.214 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:20:33.214 22:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:20:33.781 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.781 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:33.781 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.781 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.781 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.781 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.781 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.781 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.781 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:33.781 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:20:33.781 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.781 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:33.781 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:33.781 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.781 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.782 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.782 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.782 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.782 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.782 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.782 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.349 00:20:34.349 22:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.349 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.349 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.349 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.349 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.349 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.349 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.349 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.349 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.349 { 00:20:34.349 "cntlid": 83, 00:20:34.349 "qid": 0, 00:20:34.349 "state": "enabled", 00:20:34.349 "thread": "nvmf_tgt_poll_group_000", 00:20:34.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:34.349 "listen_address": { 00:20:34.349 "trtype": "TCP", 00:20:34.349 "adrfam": "IPv4", 00:20:34.349 "traddr": "10.0.0.2", 00:20:34.349 "trsvcid": "4420" 00:20:34.349 }, 00:20:34.349 "peer_address": { 00:20:34.349 "trtype": "TCP", 00:20:34.349 "adrfam": "IPv4", 00:20:34.349 "traddr": "10.0.0.1", 00:20:34.349 "trsvcid": "38874" 00:20:34.349 }, 00:20:34.349 "auth": { 00:20:34.349 "state": "completed", 00:20:34.349 "digest": "sha384", 00:20:34.349 "dhgroup": "ffdhe6144" 00:20:34.349 } 00:20:34.349 } 00:20:34.349 ]' 00:20:34.349 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:20:34.608 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.608 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.608 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:34.608 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.608 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.608 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.608 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.866 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:20:34.866 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:20:35.433 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.433 22:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:35.433 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.433 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.433 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.433 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.433 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:35.433 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:35.693 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:35.693 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.693 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.693 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:35.693 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:35.693 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.693 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.693 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.693 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.693 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.693 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.693 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.693 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.952 00:20:35.952 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.952 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.952 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.212 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.212 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.212 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.212 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.212 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.212 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.212 { 00:20:36.212 "cntlid": 85, 00:20:36.212 "qid": 0, 00:20:36.212 "state": "enabled", 00:20:36.212 "thread": "nvmf_tgt_poll_group_000", 00:20:36.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:36.212 "listen_address": { 00:20:36.212 "trtype": "TCP", 00:20:36.212 "adrfam": "IPv4", 00:20:36.212 "traddr": "10.0.0.2", 00:20:36.212 "trsvcid": "4420" 00:20:36.212 }, 00:20:36.212 "peer_address": { 00:20:36.212 "trtype": "TCP", 00:20:36.212 "adrfam": "IPv4", 00:20:36.212 "traddr": "10.0.0.1", 00:20:36.212 "trsvcid": "38912" 00:20:36.212 }, 00:20:36.212 "auth": { 00:20:36.212 "state": "completed", 00:20:36.212 "digest": "sha384", 00:20:36.212 "dhgroup": "ffdhe6144" 00:20:36.212 } 00:20:36.212 } 00:20:36.212 ]' 00:20:36.212 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.212 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.212 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.212 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:36.212 22:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.212 22:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:36.212 22:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.212 22:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.471 22:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:20:36.471 22:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:20:37.038 22:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.038 22:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:37.038 22:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.038 22:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.038 22:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.038 22:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:37.038 22:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:37.038 22:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:37.298 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:37.298 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.298 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:37.298 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:37.298 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:37.298 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.298 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:37.298 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.298 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.298 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.298 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:37.298 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.298 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.557 00:20:37.557 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.557 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.557 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.816 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.817 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.817 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.817 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.817 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.817 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.817 { 00:20:37.817 "cntlid": 87, 00:20:37.817 "qid": 0, 00:20:37.817 "state": "enabled", 00:20:37.817 "thread": "nvmf_tgt_poll_group_000", 00:20:37.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:37.817 "listen_address": { 00:20:37.817 "trtype": 
"TCP", 00:20:37.817 "adrfam": "IPv4", 00:20:37.817 "traddr": "10.0.0.2", 00:20:37.817 "trsvcid": "4420" 00:20:37.817 }, 00:20:37.817 "peer_address": { 00:20:37.817 "trtype": "TCP", 00:20:37.817 "adrfam": "IPv4", 00:20:37.817 "traddr": "10.0.0.1", 00:20:37.817 "trsvcid": "38946" 00:20:37.817 }, 00:20:37.817 "auth": { 00:20:37.817 "state": "completed", 00:20:37.817 "digest": "sha384", 00:20:37.817 "dhgroup": "ffdhe6144" 00:20:37.817 } 00:20:37.817 } 00:20:37.817 ]' 00:20:37.817 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.817 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.817 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.817 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:37.817 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.817 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.817 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.817 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.076 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:20:38.076 22:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:20:38.643 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.644 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:38.644 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.644 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.644 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.644 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.644 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.644 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.644 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.903 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:38.903 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.903 22:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.903 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:38.903 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:38.903 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.903 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.903 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.903 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.903 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.903 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.903 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.903 22:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.472 00:20:39.472 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.472 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.472 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.731 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.731 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.731 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.731 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.731 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.731 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.731 { 00:20:39.731 "cntlid": 89, 00:20:39.731 "qid": 0, 00:20:39.731 "state": "enabled", 00:20:39.731 "thread": "nvmf_tgt_poll_group_000", 00:20:39.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:39.731 "listen_address": { 00:20:39.731 "trtype": "TCP", 00:20:39.731 "adrfam": "IPv4", 00:20:39.731 "traddr": "10.0.0.2", 00:20:39.731 "trsvcid": "4420" 00:20:39.731 }, 00:20:39.731 "peer_address": { 00:20:39.731 "trtype": "TCP", 00:20:39.731 "adrfam": "IPv4", 00:20:39.731 "traddr": "10.0.0.1", 00:20:39.731 "trsvcid": "55542" 00:20:39.731 }, 00:20:39.731 "auth": { 00:20:39.731 "state": "completed", 00:20:39.731 "digest": "sha384", 00:20:39.731 "dhgroup": "ffdhe8192" 00:20:39.731 } 00:20:39.731 } 00:20:39.731 ]' 00:20:39.731 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.731 22:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.731 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.731 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.731 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.731 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.731 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.731 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.989 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:20:39.989 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:20:40.556 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:20:40.556 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:40.556 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.556 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.556 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.556 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.556 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.556 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.815 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:40.815 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.815 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.815 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:40.815 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:40.815 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.815 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.815 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.815 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.815 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.815 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.815 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.815 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.073 00:20:41.073 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.073 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.073 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.332 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.332 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.332 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.332 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.332 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.332 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.332 { 00:20:41.332 "cntlid": 91, 00:20:41.332 "qid": 0, 00:20:41.332 "state": "enabled", 00:20:41.332 "thread": "nvmf_tgt_poll_group_000", 00:20:41.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:41.332 "listen_address": { 00:20:41.332 "trtype": "TCP", 00:20:41.332 "adrfam": "IPv4", 00:20:41.332 "traddr": "10.0.0.2", 00:20:41.332 "trsvcid": "4420" 00:20:41.332 }, 00:20:41.332 "peer_address": { 00:20:41.332 "trtype": "TCP", 00:20:41.332 "adrfam": "IPv4", 00:20:41.332 "traddr": "10.0.0.1", 00:20:41.332 "trsvcid": "55564" 00:20:41.332 }, 00:20:41.332 "auth": { 00:20:41.332 "state": "completed", 00:20:41.332 "digest": "sha384", 00:20:41.332 "dhgroup": "ffdhe8192" 00:20:41.332 } 00:20:41.332 } 00:20:41.332 ]' 00:20:41.332 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.332 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.332 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.591 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:41.591 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.591 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:41.592 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.592 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.850 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:20:41.850 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.417 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.984 00:20:42.984 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.984 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.984 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.242 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.242 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.242 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.242 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.242 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.242 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.242 { 00:20:43.242 "cntlid": 93, 00:20:43.242 "qid": 0, 00:20:43.242 "state": "enabled", 00:20:43.242 "thread": "nvmf_tgt_poll_group_000", 00:20:43.242 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:43.242 "listen_address": { 00:20:43.242 "trtype": "TCP", 00:20:43.242 "adrfam": "IPv4", 00:20:43.242 "traddr": "10.0.0.2", 00:20:43.242 "trsvcid": "4420" 00:20:43.242 }, 00:20:43.242 "peer_address": { 00:20:43.242 "trtype": "TCP", 00:20:43.242 "adrfam": "IPv4", 00:20:43.242 "traddr": "10.0.0.1", 00:20:43.242 "trsvcid": "55592" 00:20:43.242 }, 00:20:43.242 "auth": { 00:20:43.242 "state": "completed", 00:20:43.242 "digest": "sha384", 00:20:43.242 "dhgroup": "ffdhe8192" 00:20:43.242 } 00:20:43.242 } 00:20:43.242 ]' 00:20:43.242 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.242 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.242 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.242 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:43.242 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.242 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.242 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.242 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.501 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:20:43.501 22:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:20:44.069 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.069 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:44.069 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.069 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.069 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.069 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.069 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:44.069 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:44.329 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:44.329 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:20:44.329 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.329 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:44.329 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:44.329 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.329 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:44.329 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.329 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.329 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.329 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:44.329 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.330 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.898 00:20:44.898 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:44.898 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.899 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.899 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.899 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.899 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.899 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.899 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.899 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.899 { 00:20:44.899 "cntlid": 95, 00:20:44.899 "qid": 0, 00:20:44.899 "state": "enabled", 00:20:44.899 "thread": "nvmf_tgt_poll_group_000", 00:20:44.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:44.899 "listen_address": { 00:20:44.899 "trtype": "TCP", 00:20:44.899 "adrfam": "IPv4", 00:20:44.899 "traddr": "10.0.0.2", 00:20:44.899 "trsvcid": "4420" 00:20:44.899 }, 00:20:44.899 "peer_address": { 00:20:44.899 "trtype": "TCP", 00:20:44.899 "adrfam": "IPv4", 00:20:44.899 "traddr": "10.0.0.1", 00:20:44.899 "trsvcid": "55618" 00:20:44.899 }, 00:20:44.899 "auth": { 00:20:44.899 "state": "completed", 00:20:44.899 "digest": "sha384", 00:20:44.899 "dhgroup": "ffdhe8192" 00:20:44.899 } 00:20:44.899 } 00:20:44.899 ]' 00:20:44.899 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.899 22:30:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.899 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.158 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:45.158 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.158 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.158 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.158 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.416 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:20:45.417 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.984 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.243 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.243 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.243 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.243 22:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.243 00:20:46.243 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.243 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.243 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.502 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.502 22:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.502 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.502 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.502 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.502 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.502 { 00:20:46.502 "cntlid": 97, 00:20:46.502 "qid": 0, 00:20:46.502 "state": "enabled", 00:20:46.502 "thread": "nvmf_tgt_poll_group_000", 00:20:46.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:46.502 "listen_address": { 00:20:46.502 "trtype": "TCP", 00:20:46.502 "adrfam": "IPv4", 00:20:46.502 "traddr": "10.0.0.2", 00:20:46.502 "trsvcid": "4420" 00:20:46.502 }, 00:20:46.502 "peer_address": { 00:20:46.502 "trtype": "TCP", 00:20:46.502 "adrfam": "IPv4", 00:20:46.502 "traddr": "10.0.0.1", 00:20:46.502 "trsvcid": "55630" 00:20:46.502 }, 00:20:46.502 "auth": { 00:20:46.502 "state": "completed", 00:20:46.502 "digest": "sha512", 00:20:46.502 "dhgroup": "null" 00:20:46.502 } 00:20:46.502 } 00:20:46.502 ]' 00:20:46.502 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.502 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.761 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.761 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:46.761 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.761 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.761 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.761 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.761 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:20:46.761 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:20:47.330 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.330 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:47.330 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.330 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.330 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.330 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.330 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:47.330 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:47.589 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:47.589 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.589 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:47.589 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:47.589 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:47.589 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.589 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.589 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.589 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.589 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.589 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.589 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.589 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.848 00:20:47.848 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.848 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.848 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.107 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.107 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.107 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.107 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.107 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.107 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.107 { 00:20:48.107 "cntlid": 99, 
00:20:48.107 "qid": 0, 00:20:48.107 "state": "enabled", 00:20:48.107 "thread": "nvmf_tgt_poll_group_000", 00:20:48.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:48.107 "listen_address": { 00:20:48.107 "trtype": "TCP", 00:20:48.107 "adrfam": "IPv4", 00:20:48.107 "traddr": "10.0.0.2", 00:20:48.107 "trsvcid": "4420" 00:20:48.107 }, 00:20:48.107 "peer_address": { 00:20:48.107 "trtype": "TCP", 00:20:48.107 "adrfam": "IPv4", 00:20:48.107 "traddr": "10.0.0.1", 00:20:48.107 "trsvcid": "55668" 00:20:48.107 }, 00:20:48.107 "auth": { 00:20:48.107 "state": "completed", 00:20:48.107 "digest": "sha512", 00:20:48.107 "dhgroup": "null" 00:20:48.107 } 00:20:48.107 } 00:20:48.107 ]' 00:20:48.107 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.107 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.107 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.107 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:48.107 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.107 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.107 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.107 22:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.366 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret 
DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:20:48.366 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:20:48.933 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.934 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:48.934 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.934 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.934 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.934 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.934 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:48.934 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:49.192 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:20:49.192 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.192 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:49.192 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:49.192 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:49.192 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.192 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.192 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.192 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.192 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.192 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.192 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.192 22:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.451 00:20:49.451 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.451 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.451 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.605 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.605 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.605 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.605 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.605 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.605 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.605 { 00:20:50.605 "cntlid": 101, 00:20:50.605 "qid": 0, 00:20:50.605 "state": "enabled", 00:20:50.605 "thread": "nvmf_tgt_poll_group_000", 00:20:50.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:50.605 "listen_address": { 00:20:50.605 "trtype": "TCP", 00:20:50.605 "adrfam": "IPv4", 00:20:50.605 "traddr": "10.0.0.2", 00:20:50.605 "trsvcid": "4420" 00:20:50.605 }, 00:20:50.605 "peer_address": { 00:20:50.605 "trtype": "TCP", 00:20:50.605 "adrfam": "IPv4", 00:20:50.605 "traddr": "10.0.0.1", 00:20:50.605 "trsvcid": "41508" 00:20:50.605 }, 00:20:50.605 "auth": { 00:20:50.605 "state": "completed", 00:20:50.605 "digest": "sha512", 00:20:50.605 "dhgroup": "null" 00:20:50.605 } 00:20:50.605 } 
00:20:50.605 ]' 00:20:50.605 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.605 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.605 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.605 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:50.605 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.605 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.605 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.606 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.606 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:20:50.606 22:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:20:50.606 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.606 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.606 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:50.606 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.606 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.606 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.606 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.606 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.606 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.864 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:50.864 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.864 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:50.864 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:50.864 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:50.864 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.864 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:50.864 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.864 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.864 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.864 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:50.864 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.864 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:51.122 00:20:51.122 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.122 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.122 22:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.381 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.381 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:51.381 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.381 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.381 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.381 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.381 { 00:20:51.381 "cntlid": 103, 00:20:51.381 "qid": 0, 00:20:51.381 "state": "enabled", 00:20:51.381 "thread": "nvmf_tgt_poll_group_000", 00:20:51.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:51.381 "listen_address": { 00:20:51.381 "trtype": "TCP", 00:20:51.382 "adrfam": "IPv4", 00:20:51.382 "traddr": "10.0.0.2", 00:20:51.382 "trsvcid": "4420" 00:20:51.382 }, 00:20:51.382 "peer_address": { 00:20:51.382 "trtype": "TCP", 00:20:51.382 "adrfam": "IPv4", 00:20:51.382 "traddr": "10.0.0.1", 00:20:51.382 "trsvcid": "41534" 00:20:51.382 }, 00:20:51.382 "auth": { 00:20:51.382 "state": "completed", 00:20:51.382 "digest": "sha512", 00:20:51.382 "dhgroup": "null" 00:20:51.382 } 00:20:51.382 } 00:20:51.382 ]' 00:20:51.382 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.382 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.382 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.382 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:51.382 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.382 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.382 22:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.382 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.640 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:20:51.640 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:20:52.212 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.212 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:52.212 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.212 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.212 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.212 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.212 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.212 22:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:52.212 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:52.471 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:52.471 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.471 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:52.471 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:52.471 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:52.471 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.471 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.471 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.471 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.471 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.471 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.471 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.471 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.730 00:20:52.730 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.730 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.730 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.989 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.989 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.989 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.989 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.989 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.989 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.989 { 00:20:52.989 "cntlid": 105, 00:20:52.989 "qid": 0, 00:20:52.989 "state": "enabled", 00:20:52.989 "thread": "nvmf_tgt_poll_group_000", 00:20:52.989 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:52.989 "listen_address": { 00:20:52.989 "trtype": "TCP", 00:20:52.989 "adrfam": "IPv4", 00:20:52.989 "traddr": "10.0.0.2", 00:20:52.989 "trsvcid": "4420" 00:20:52.989 }, 00:20:52.989 "peer_address": { 00:20:52.989 "trtype": "TCP", 00:20:52.989 "adrfam": "IPv4", 00:20:52.989 "traddr": "10.0.0.1", 00:20:52.989 "trsvcid": "41566" 00:20:52.989 }, 00:20:52.989 "auth": { 00:20:52.989 "state": "completed", 00:20:52.989 "digest": "sha512", 00:20:52.989 "dhgroup": "ffdhe2048" 00:20:52.989 } 00:20:52.989 } 00:20:52.989 ]' 00:20:52.989 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.989 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.989 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.989 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.989 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.989 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.989 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.989 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.248 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret 
DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=:
00:20:53.248 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=:
00:20:53.816 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:53.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:53.816 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:53.816 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:53.816 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.816 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:53.816 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:53.816 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:53.816 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:54.075 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:20:54.075 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:54.075 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:54.075 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:54.075 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:54.076 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:54.076 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:54.076 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.076 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:54.076 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.076 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:54.076 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:54.076 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:54.335
00:20:54.335 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:54.335 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:54.335 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:54.335 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:54.335 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:54.335 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.335 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:54.335 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.335 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:54.335 {
00:20:54.335 "cntlid": 107,
00:20:54.335 "qid": 0,
00:20:54.335 "state": "enabled",
00:20:54.335 "thread": "nvmf_tgt_poll_group_000",
00:20:54.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:54.335 "listen_address": {
00:20:54.335 "trtype": "TCP",
00:20:54.335 "adrfam": "IPv4",
00:20:54.335 "traddr": "10.0.0.2",
00:20:54.335 "trsvcid": "4420"
00:20:54.335 },
00:20:54.335 "peer_address": {
00:20:54.335 "trtype": "TCP",
00:20:54.335 "adrfam": "IPv4",
00:20:54.335 "traddr": "10.0.0.1",
00:20:54.335 "trsvcid": "41586"
00:20:54.335 },
00:20:54.335 "auth": {
00:20:54.335 "state": "completed",
00:20:54.335 "digest": "sha512",
00:20:54.335 "dhgroup": "ffdhe2048"
00:20:54.335 }
00:20:54.335 }
00:20:54.335 ]'
00:20:54.335 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:54.593 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:54.593 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:54.593 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:54.593 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:54.593 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:54.593 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:54.593 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:54.851 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==:
00:20:54.851 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==:
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:55.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:55.418 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:55.676
00:20:55.676 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:55.676 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:55.676 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:55.934 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:55.934 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:55.934 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:55.934 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:55.935 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:55.935 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:55.935 {
00:20:55.935 "cntlid": 109,
00:20:55.935 "qid": 0,
00:20:55.935 "state": "enabled",
00:20:55.935 "thread": "nvmf_tgt_poll_group_000",
00:20:55.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:55.935 "listen_address": {
00:20:55.935 "trtype": "TCP",
00:20:55.935 "adrfam": "IPv4",
00:20:55.935 "traddr": "10.0.0.2",
00:20:55.935 "trsvcid": "4420"
00:20:55.935 },
00:20:55.935 "peer_address": {
00:20:55.935 "trtype": "TCP",
00:20:55.935 "adrfam": "IPv4",
00:20:55.935 "traddr": "10.0.0.1",
00:20:55.935 "trsvcid": "41620"
00:20:55.935 },
00:20:55.935 "auth": {
00:20:55.935 "state": "completed",
00:20:55.935 "digest": "sha512",
00:20:55.935 "dhgroup": "ffdhe2048"
00:20:55.935 }
00:20:55.935 }
00:20:55.935 ]'
00:20:55.935 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:56.193 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:56.193 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:56.193 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:56.193 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:56.193 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:56.193 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:56.193 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:56.452 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0:
00:20:56.452 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0:
00:20:57.019 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:57.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:57.019 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:57.019 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.019 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:57.019 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.019 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:57.019 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:57.019 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:57.278 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3
00:20:57.278 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:57.278 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:57.278 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:57.278 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:57.279 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:57.279 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:20:57.279 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.279 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:57.279 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.279 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:57.279 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:57.279 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:57.279
00:20:57.538 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:57.538 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:57.538 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:57.538 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:57.538 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:57.538 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.538 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:57.538 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.538 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:57.538 {
00:20:57.538 "cntlid": 111,
00:20:57.538 "qid": 0,
00:20:57.538 "state": "enabled",
00:20:57.538 "thread": "nvmf_tgt_poll_group_000",
00:20:57.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:57.538 "listen_address": {
00:20:57.538 "trtype": "TCP",
00:20:57.538 "adrfam": "IPv4",
00:20:57.538 "traddr": "10.0.0.2",
00:20:57.538 "trsvcid": "4420"
00:20:57.538 },
00:20:57.538 "peer_address": {
00:20:57.538 "trtype": "TCP",
00:20:57.538 "adrfam": "IPv4",
00:20:57.538 "traddr": "10.0.0.1",
00:20:57.538 "trsvcid": "41652"
00:20:57.538 },
00:20:57.538 "auth": {
00:20:57.538 "state": "completed",
00:20:57.538 "digest": "sha512",
00:20:57.538 "dhgroup": "ffdhe2048"
00:20:57.538 }
00:20:57.538 }
00:20:57.538 ]'
00:20:57.538 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:57.797 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:57.797 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:57.797 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:57.797 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:57.797 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:57.797 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:57.797 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:58.056 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=:
00:20:58.056 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=:
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:58.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:58.623 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:58.882
00:20:58.882 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:58.882 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:58.882 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:59.141 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:59.141 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:59.141 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:59.141 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:59.141 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:59.141 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:59.141 {
00:20:59.141 "cntlid": 113,
00:20:59.141 "qid": 0,
00:20:59.141 "state": "enabled",
00:20:59.141 "thread": "nvmf_tgt_poll_group_000",
00:20:59.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:59.141 "listen_address": {
00:20:59.141 "trtype": "TCP",
00:20:59.141 "adrfam": "IPv4",
00:20:59.141 "traddr": "10.0.0.2",
00:20:59.141 "trsvcid": "4420"
00:20:59.141 },
00:20:59.141 "peer_address": {
00:20:59.141 "trtype": "TCP",
00:20:59.141 "adrfam": "IPv4",
00:20:59.141 "traddr": "10.0.0.1",
00:20:59.141 "trsvcid": "58116"
00:20:59.141 },
00:20:59.141 "auth": {
00:20:59.141 "state": "completed",
00:20:59.141 "digest": "sha512",
00:20:59.141 "dhgroup": "ffdhe3072"
00:20:59.141 }
00:20:59.141 }
00:20:59.141 ]'
00:20:59.141 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:59.141 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:59.141 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:59.400 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:59.400 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:59.400 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:59.400 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:59.400 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:59.659 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=:
00:20:59.659 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=:
00:21:00.227 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:00.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:00.227 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:00.227 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:00.227 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:00.227 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:00.227 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:00.227 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:00.227 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:00.227 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1
00:21:00.227 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:00.227 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:00.227 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:00.227 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:00.227 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:00.227 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:00.227 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:00.227 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:00.227 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:00.227 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:00.486 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:00.486 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:00.486
00:21:00.745 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:00.745 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:00.745 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:00.745 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:00.745 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:00.745 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:00.745 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:00.745 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:00.745 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:00.745 {
00:21:00.745 "cntlid": 115,
00:21:00.745 "qid": 0,
00:21:00.745 "state": "enabled",
00:21:00.745 "thread": "nvmf_tgt_poll_group_000",
00:21:00.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:00.745 "listen_address": {
00:21:00.745 "trtype": "TCP",
00:21:00.745 "adrfam": "IPv4",
00:21:00.745 "traddr": "10.0.0.2",
00:21:00.745 "trsvcid": "4420"
00:21:00.745 },
00:21:00.745 "peer_address": {
00:21:00.745 "trtype": "TCP",
00:21:00.745 "adrfam": "IPv4",
00:21:00.745 "traddr": "10.0.0.1",
00:21:00.745 "trsvcid": "58140"
00:21:00.745 },
00:21:00.745 "auth": {
00:21:00.745 "state": "completed",
00:21:00.745 "digest": "sha512",
00:21:00.745 "dhgroup": "ffdhe3072"
00:21:00.745 }
00:21:00.745 }
00:21:00.745 ]'
00:21:00.745 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:01.004 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:01.004 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:01.004 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:01.004 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:01.004 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:01.004 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:01.004 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:01.264 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==:
00:21:01.264 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==:
00:21:01.831 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:01.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:01.831 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set
+x 00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.832 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.091 00:21:02.349 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.350 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.350 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.350 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.350 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.350 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.350 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.350 22:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.350 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.350 { 00:21:02.350 "cntlid": 117, 00:21:02.350 "qid": 0, 00:21:02.350 "state": "enabled", 00:21:02.350 "thread": "nvmf_tgt_poll_group_000", 00:21:02.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:02.350 "listen_address": { 00:21:02.350 "trtype": "TCP", 00:21:02.350 "adrfam": "IPv4", 00:21:02.350 "traddr": "10.0.0.2", 00:21:02.350 "trsvcid": "4420" 00:21:02.350 }, 00:21:02.350 "peer_address": { 00:21:02.350 "trtype": "TCP", 00:21:02.350 "adrfam": "IPv4", 00:21:02.350 "traddr": "10.0.0.1", 00:21:02.350 "trsvcid": "58154" 00:21:02.350 }, 00:21:02.350 "auth": { 00:21:02.350 "state": "completed", 00:21:02.350 "digest": "sha512", 00:21:02.350 "dhgroup": "ffdhe3072" 00:21:02.350 } 00:21:02.350 } 00:21:02.350 ]' 00:21:02.350 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.350 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.608 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.608 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:02.608 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.608 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.608 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.609 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.867 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:21:02.867 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.436 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.695 00:21:03.695 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.695 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.695 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.954 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.954 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.954 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.954 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.954 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.954 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.954 { 00:21:03.954 "cntlid": 119, 00:21:03.954 "qid": 0, 00:21:03.954 "state": "enabled", 00:21:03.954 "thread": "nvmf_tgt_poll_group_000", 00:21:03.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:03.954 "listen_address": { 00:21:03.954 "trtype": "TCP", 00:21:03.954 "adrfam": "IPv4", 00:21:03.954 "traddr": "10.0.0.2", 00:21:03.954 "trsvcid": "4420" 00:21:03.954 }, 00:21:03.954 "peer_address": { 00:21:03.954 "trtype": "TCP", 00:21:03.954 "adrfam": "IPv4", 00:21:03.954 "traddr": "10.0.0.1", 
00:21:03.954 "trsvcid": "58172" 00:21:03.954 }, 00:21:03.954 "auth": { 00:21:03.954 "state": "completed", 00:21:03.954 "digest": "sha512", 00:21:03.954 "dhgroup": "ffdhe3072" 00:21:03.954 } 00:21:03.954 } 00:21:03.954 ]' 00:21:03.954 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.954 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.954 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.213 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:04.213 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.213 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.213 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.213 22:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.471 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:21:04.471 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:21:05.038 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.038 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:05.038 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.038 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.038 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.038 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.038 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.038 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:05.038 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:05.038 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:05.038 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.038 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.038 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:05.038 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:05.038 22:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.039 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.039 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.039 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.039 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.039 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.039 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.039 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.297 00:21:05.297 22:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.297 22:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.297 22:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.556 22:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.556 22:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.556 22:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.556 22:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.556 22:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.556 22:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.556 { 00:21:05.556 "cntlid": 121, 00:21:05.556 "qid": 0, 00:21:05.556 "state": "enabled", 00:21:05.556 "thread": "nvmf_tgt_poll_group_000", 00:21:05.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:05.556 "listen_address": { 00:21:05.556 "trtype": "TCP", 00:21:05.556 "adrfam": "IPv4", 00:21:05.556 "traddr": "10.0.0.2", 00:21:05.556 "trsvcid": "4420" 00:21:05.556 }, 00:21:05.556 "peer_address": { 00:21:05.556 "trtype": "TCP", 00:21:05.556 "adrfam": "IPv4", 00:21:05.556 "traddr": "10.0.0.1", 00:21:05.556 "trsvcid": "58194" 00:21:05.556 }, 00:21:05.556 "auth": { 00:21:05.556 "state": "completed", 00:21:05.556 "digest": "sha512", 00:21:05.556 "dhgroup": "ffdhe4096" 00:21:05.556 } 00:21:05.556 } 00:21:05.556 ]' 00:21:05.556 22:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.556 22:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.556 22:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.815 22:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:05.815 22:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.815 22:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.815 22:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.815 22:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.815 22:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:21:05.815 22:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:21:06.382 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.382 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:06.382 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.382 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.382 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.382 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.382 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:06.382 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:06.641 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:06.641 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.641 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.641 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:06.641 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:06.641 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.641 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.641 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.641 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:06.641 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.641 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.641 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.641 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.900 00:21:06.900 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.900 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.900 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.159 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.159 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.159 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.159 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.159 
22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.159 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.159 { 00:21:07.159 "cntlid": 123, 00:21:07.159 "qid": 0, 00:21:07.159 "state": "enabled", 00:21:07.159 "thread": "nvmf_tgt_poll_group_000", 00:21:07.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:07.159 "listen_address": { 00:21:07.159 "trtype": "TCP", 00:21:07.159 "adrfam": "IPv4", 00:21:07.159 "traddr": "10.0.0.2", 00:21:07.159 "trsvcid": "4420" 00:21:07.159 }, 00:21:07.159 "peer_address": { 00:21:07.159 "trtype": "TCP", 00:21:07.159 "adrfam": "IPv4", 00:21:07.159 "traddr": "10.0.0.1", 00:21:07.159 "trsvcid": "58226" 00:21:07.159 }, 00:21:07.159 "auth": { 00:21:07.159 "state": "completed", 00:21:07.160 "digest": "sha512", 00:21:07.160 "dhgroup": "ffdhe4096" 00:21:07.160 } 00:21:07.160 } 00:21:07.160 ]' 00:21:07.160 22:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.160 22:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.160 22:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.160 22:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:07.160 22:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.418 22:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.418 22:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.418 22:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.418 22:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:21:07.418 22:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:21:07.985 22:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.985 22:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:07.985 22:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.985 22:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.985 22:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.985 22:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.985 22:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:07.985 22:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:08.244 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:08.244 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.244 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.244 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:08.244 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:08.244 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.244 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.244 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.244 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.244 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.244 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.244 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.245 22:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.503 00:21:08.503 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.503 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.503 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.762 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.762 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.762 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.762 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.762 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.762 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.762 { 00:21:08.762 "cntlid": 125, 00:21:08.762 "qid": 0, 00:21:08.762 "state": "enabled", 00:21:08.762 "thread": "nvmf_tgt_poll_group_000", 00:21:08.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:08.762 "listen_address": { 00:21:08.762 "trtype": "TCP", 00:21:08.762 "adrfam": "IPv4", 00:21:08.762 "traddr": "10.0.0.2", 00:21:08.762 "trsvcid": "4420" 00:21:08.762 }, 00:21:08.762 "peer_address": { 
00:21:08.762 "trtype": "TCP", 00:21:08.762 "adrfam": "IPv4", 00:21:08.762 "traddr": "10.0.0.1", 00:21:08.762 "trsvcid": "58256" 00:21:08.762 }, 00:21:08.762 "auth": { 00:21:08.762 "state": "completed", 00:21:08.762 "digest": "sha512", 00:21:08.762 "dhgroup": "ffdhe4096" 00:21:08.762 } 00:21:08.762 } 00:21:08.762 ]' 00:21:08.762 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.762 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.762 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.762 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:08.762 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.021 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.021 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.021 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.021 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:21:09.021 22:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0: 00:21:09.588 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.588 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:09.588 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.588 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.588 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.588 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.588 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:09.588 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:09.847 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:09.847 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.847 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.847 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:09.847 22:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:09.847 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.847 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:09.847 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.847 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.847 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.847 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:09.847 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.847 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.106 00:21:10.106 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.106 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.106 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.364 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.364 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.364 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.364 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.364 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.364 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.364 { 00:21:10.364 "cntlid": 127, 00:21:10.364 "qid": 0, 00:21:10.364 "state": "enabled", 00:21:10.364 "thread": "nvmf_tgt_poll_group_000", 00:21:10.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:10.364 "listen_address": { 00:21:10.364 "trtype": "TCP", 00:21:10.364 "adrfam": "IPv4", 00:21:10.364 "traddr": "10.0.0.2", 00:21:10.364 "trsvcid": "4420" 00:21:10.364 }, 00:21:10.364 "peer_address": { 00:21:10.364 "trtype": "TCP", 00:21:10.364 "adrfam": "IPv4", 00:21:10.364 "traddr": "10.0.0.1", 00:21:10.364 "trsvcid": "58030" 00:21:10.364 }, 00:21:10.364 "auth": { 00:21:10.364 "state": "completed", 00:21:10.364 "digest": "sha512", 00:21:10.364 "dhgroup": "ffdhe4096" 00:21:10.364 } 00:21:10.364 } 00:21:10.364 ]' 00:21:10.364 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.364 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.364 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.364 22:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:10.364 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.623 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.623 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.623 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.623 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:21:10.623 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:21:11.191 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.191 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:11.191 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.191 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:11.191 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.191 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.191 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.191 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:11.191 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:11.450 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:11.450 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.450 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.450 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:11.450 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:11.450 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.450 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.450 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.450 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:11.450 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.450 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.450 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.450 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.017 00:21:12.017 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.017 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.017 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.017 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.017 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.017 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.017 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.017 22:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.017 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.017 { 00:21:12.018 "cntlid": 129, 00:21:12.018 "qid": 0, 00:21:12.018 "state": "enabled", 00:21:12.018 "thread": "nvmf_tgt_poll_group_000", 00:21:12.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:12.018 "listen_address": { 00:21:12.018 "trtype": "TCP", 00:21:12.018 "adrfam": "IPv4", 00:21:12.018 "traddr": "10.0.0.2", 00:21:12.018 "trsvcid": "4420" 00:21:12.018 }, 00:21:12.018 "peer_address": { 00:21:12.018 "trtype": "TCP", 00:21:12.018 "adrfam": "IPv4", 00:21:12.018 "traddr": "10.0.0.1", 00:21:12.018 "trsvcid": "58056" 00:21:12.018 }, 00:21:12.018 "auth": { 00:21:12.018 "state": "completed", 00:21:12.018 "digest": "sha512", 00:21:12.018 "dhgroup": "ffdhe6144" 00:21:12.018 } 00:21:12.018 } 00:21:12.018 ]' 00:21:12.018 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.018 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.018 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.276 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:12.276 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.276 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.276 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.276 22:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.535 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:21:12.535 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:13.104 22:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.104 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.671 00:21:13.671 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.671 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.671 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.671 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.671 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.671 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.671 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.671 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.671 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.671 { 00:21:13.671 "cntlid": 131, 00:21:13.671 "qid": 0, 00:21:13.671 "state": "enabled", 00:21:13.671 "thread": "nvmf_tgt_poll_group_000", 00:21:13.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:13.671 "listen_address": { 00:21:13.671 "trtype": "TCP", 00:21:13.671 "adrfam": "IPv4", 00:21:13.671 "traddr": "10.0.0.2", 00:21:13.671 
"trsvcid": "4420" 00:21:13.671 }, 00:21:13.671 "peer_address": { 00:21:13.671 "trtype": "TCP", 00:21:13.671 "adrfam": "IPv4", 00:21:13.671 "traddr": "10.0.0.1", 00:21:13.671 "trsvcid": "58090" 00:21:13.671 }, 00:21:13.671 "auth": { 00:21:13.671 "state": "completed", 00:21:13.671 "digest": "sha512", 00:21:13.671 "dhgroup": "ffdhe6144" 00:21:13.671 } 00:21:13.671 } 00:21:13.671 ]' 00:21:13.671 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.930 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.930 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.930 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:13.930 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.930 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.930 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.930 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.188 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:21:14.188 22:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==: 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.754 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.013 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.272 00:21:15.272 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.272 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:15.272 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.531 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.531 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.531 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.531 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.531 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.531 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.531 { 00:21:15.531 "cntlid": 133, 00:21:15.531 "qid": 0, 00:21:15.531 "state": "enabled", 00:21:15.531 "thread": "nvmf_tgt_poll_group_000", 00:21:15.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:15.531 "listen_address": { 00:21:15.531 "trtype": "TCP", 00:21:15.531 "adrfam": "IPv4", 00:21:15.531 "traddr": "10.0.0.2", 00:21:15.531 "trsvcid": "4420" 00:21:15.531 }, 00:21:15.531 "peer_address": { 00:21:15.531 "trtype": "TCP", 00:21:15.531 "adrfam": "IPv4", 00:21:15.531 "traddr": "10.0.0.1", 00:21:15.531 "trsvcid": "58110" 00:21:15.531 }, 00:21:15.531 "auth": { 00:21:15.531 "state": "completed", 00:21:15.531 "digest": "sha512", 00:21:15.531 "dhgroup": "ffdhe6144" 00:21:15.531 } 00:21:15.531 } 00:21:15.531 ]' 00:21:15.532 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.532 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.532 22:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:15.532 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:15.532 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:15.532 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:15.532 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:15.532 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:15.790 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0:
00:21:15.791 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0:
00:21:16.358 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:16.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:16.358 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:16.358 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.358 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:16.358 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.358 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:16.358 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:16.359 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:16.618 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:21:16.618 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:16.618 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:16.618 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:16.618 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:16.618 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:16.618 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:21:16.618 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.618 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:16.618 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.618 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:16.618 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:16.618 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:16.876
00:21:16.876 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:16.876 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:16.876 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:17.135 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:17.135 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:17.135 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.135 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:17.135 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.135 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:17.135 {
00:21:17.135 "cntlid": 135,
00:21:17.135 "qid": 0,
00:21:17.135 "state": "enabled",
00:21:17.135 "thread": "nvmf_tgt_poll_group_000",
00:21:17.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:17.135 "listen_address": {
00:21:17.135 "trtype": "TCP",
00:21:17.135 "adrfam": "IPv4",
00:21:17.135 "traddr": "10.0.0.2",
00:21:17.135 "trsvcid": "4420"
00:21:17.135 },
00:21:17.135 "peer_address": {
00:21:17.135 "trtype": "TCP",
00:21:17.135 "adrfam": "IPv4",
00:21:17.135 "traddr": "10.0.0.1",
00:21:17.135 "trsvcid": "58138"
00:21:17.135 },
00:21:17.135 "auth": {
00:21:17.135 "state": "completed",
00:21:17.135 "digest": "sha512",
00:21:17.135 "dhgroup": "ffdhe6144"
00:21:17.135 }
00:21:17.135 }
00:21:17.135 ]'
00:21:17.135 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:17.135 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:17.135 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:17.135 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:17.135 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:17.135 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:17.135 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:17.135 22:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:17.394 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=:
00:21:17.394 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=:
00:21:17.961 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:17.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:17.961 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:17.961 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.961 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:17.961 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.961 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:17.961 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:17.961 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:17.961 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:18.220 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:21:18.220 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:18.220 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:18.220 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:18.220 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:18.220 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:18.220 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:18.220 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.220 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:18.220 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.220 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:18.220 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:18.220 22:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:18.788
00:21:18.788 22:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:18.788 22:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:18.788 22:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:19.046 22:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:19.046 22:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:19.046 22:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.046 22:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:19.046 22:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.046 22:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:19.046 {
00:21:19.046 "cntlid": 137,
00:21:19.046 "qid": 0,
00:21:19.046 "state": "enabled",
00:21:19.046 "thread": "nvmf_tgt_poll_group_000",
00:21:19.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:19.046 "listen_address": {
00:21:19.046 "trtype": "TCP",
00:21:19.046 "adrfam": "IPv4",
00:21:19.046 "traddr": "10.0.0.2",
00:21:19.046 "trsvcid": "4420"
00:21:19.046 },
00:21:19.046 "peer_address": {
00:21:19.046 "trtype": "TCP",
00:21:19.046 "adrfam": "IPv4",
00:21:19.046 "traddr": "10.0.0.1",
00:21:19.046 "trsvcid": "58178"
00:21:19.046 },
00:21:19.046 "auth": {
00:21:19.046 "state": "completed",
00:21:19.046 "digest": "sha512",
00:21:19.046 "dhgroup": "ffdhe8192"
00:21:19.046 }
00:21:19.046 }
00:21:19.046 ]'
00:21:19.046 22:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:19.046 22:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:19.046 22:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:19.046 22:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:19.046 22:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:19.046 22:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:19.046 22:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:19.047 22:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:19.305 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=:
00:21:19.305 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=:
00:21:19.871 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:19.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:19.871 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:19.871 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.871 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:19.871 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.871 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:19.871 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:19.871 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:20.129 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:21:20.129 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:20.129 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:20.129 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:20.129 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:20.129 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:20.129 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:20.129 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.129 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:20.129 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.129 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:20.129 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:20.129 22:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:20.696
00:21:20.696 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:20.696 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:20.696 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:20.696 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:20.696 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:20.696 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.696 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:20.696 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.696 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:20.696 {
00:21:20.696 "cntlid": 139,
00:21:20.696 "qid": 0,
00:21:20.696 "state": "enabled",
00:21:20.696 "thread": "nvmf_tgt_poll_group_000",
00:21:20.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:20.696 "listen_address": {
00:21:20.696 "trtype": "TCP",
00:21:20.696 "adrfam": "IPv4",
00:21:20.696 "traddr": "10.0.0.2",
00:21:20.696 "trsvcid": "4420"
00:21:20.696 },
00:21:20.696 "peer_address": {
00:21:20.696 "trtype": "TCP",
00:21:20.696 "adrfam": "IPv4",
00:21:20.696 "traddr": "10.0.0.1",
00:21:20.696 "trsvcid": "55306"
00:21:20.696 },
00:21:20.696 "auth": {
00:21:20.697 "state": "completed",
00:21:20.697 "digest": "sha512",
00:21:20.697 "dhgroup": "ffdhe8192"
00:21:20.697 }
00:21:20.697 }
00:21:20.697 ]'
00:21:20.697 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:20.956 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:20.956 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:20.956 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:20.956 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:20.956 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:20.956 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:20.956 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:21.215 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==:
00:21:21.215 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: --dhchap-ctrl-secret DHHC-1:02:ZmQ4ODAwZjU0Mjg0ZWQ4NjQyN2EwZWQ0ODdmZTc3MDQ3NTVlMzdkNjY3YzA4N2EzubQHtQ==:
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:21.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.782 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:22.041 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:22.041 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:22.300
00:21:22.300 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:22.300 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:22.300 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:22.558 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:22.559 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:22.559 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.559 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:22.559 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.559 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:22.559 {
00:21:22.559 "cntlid": 141,
00:21:22.559 "qid": 0,
00:21:22.559 "state": "enabled",
00:21:22.559 "thread": "nvmf_tgt_poll_group_000",
00:21:22.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:22.559 "listen_address": {
00:21:22.559 "trtype": "TCP",
00:21:22.559 "adrfam": "IPv4",
00:21:22.559 "traddr": "10.0.0.2",
00:21:22.559 "trsvcid": "4420"
00:21:22.559 },
00:21:22.559 "peer_address": {
00:21:22.559 "trtype": "TCP",
00:21:22.559 "adrfam": "IPv4",
00:21:22.559 "traddr": "10.0.0.1",
00:21:22.559 "trsvcid": "55326"
00:21:22.559 },
00:21:22.559 "auth": {
00:21:22.559 "state": "completed",
00:21:22.559 "digest": "sha512",
00:21:22.559 "dhgroup": "ffdhe8192"
00:21:22.559 }
00:21:22.559 }
00:21:22.559 ]'
00:21:22.559 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:22.559 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:22.559 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:22.817 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:22.817 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:22.817 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:22.817 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:22.817 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:23.076 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0:
00:21:23.076 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:01:MTg5YTVjNWEzZDZhNzIyOTAyM2JmNzQwOGJmMzk5NGXBGus0:
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:23.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:23.644 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:24.212
00:21:24.212 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:24.212 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:24.212 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:24.469 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:24.469 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:24.469 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:24.469 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:24.469 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:24.469 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:24.469 {
00:21:24.469 "cntlid": 143,
00:21:24.469 "qid": 0,
00:21:24.469 "state": "enabled",
00:21:24.469 "thread": "nvmf_tgt_poll_group_000",
00:21:24.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:24.469 "listen_address": {
00:21:24.469 "trtype": "TCP",
00:21:24.469 "adrfam": "IPv4",
00:21:24.469 "traddr": "10.0.0.2",
00:21:24.469 "trsvcid": "4420"
00:21:24.469 },
00:21:24.469 "peer_address": {
00:21:24.469 "trtype": "TCP",
00:21:24.469 "adrfam": "IPv4",
00:21:24.469 "traddr": "10.0.0.1",
00:21:24.469 "trsvcid": "55352"
00:21:24.469 },
00:21:24.469 "auth": {
00:21:24.469 "state": "completed",
00:21:24.469 "digest": "sha512",
00:21:24.469 "dhgroup": "ffdhe8192"
00:21:24.469 }
00:21:24.469 }
00:21:24.469 ]'
00:21:24.469 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:24.469 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:24.469 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:24.469 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:24.469 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:24.469 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:24.469 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:24.469 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:24.727 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=:
00:21:24.727 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=:
00:21:25.294 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:25.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:25.294 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:25.294 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.294 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:25.294 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.294 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:21:25.294 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512
00:21:25.294 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:21:25.294 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:21:25.294 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:21:25.294 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:21:25.552 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0
00:21:25.552 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:25.552 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:25.552 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:25.552 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:25.552 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:25.552 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:25.552 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.552 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:25.552 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.552 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:25.552 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:25.553 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.121 00:21:26.121 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.121 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.121 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.121 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.121 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.121 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.121 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.121 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.121 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.121 { 00:21:26.121 "cntlid": 145, 00:21:26.121 "qid": 0, 00:21:26.121 "state": "enabled", 00:21:26.121 "thread": "nvmf_tgt_poll_group_000", 00:21:26.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:26.121 "listen_address": { 00:21:26.121 "trtype": "TCP", 00:21:26.121 "adrfam": "IPv4", 00:21:26.121 "traddr": "10.0.0.2", 00:21:26.121 "trsvcid": "4420" 00:21:26.121 }, 00:21:26.121 "peer_address": { 00:21:26.121 "trtype": "TCP", 00:21:26.121 "adrfam": "IPv4", 00:21:26.121 "traddr": "10.0.0.1", 00:21:26.121 "trsvcid": "55384" 00:21:26.121 }, 00:21:26.121 "auth": { 00:21:26.121 "state": 
"completed", 00:21:26.121 "digest": "sha512", 00:21:26.121 "dhgroup": "ffdhe8192" 00:21:26.121 } 00:21:26.121 } 00:21:26.121 ]' 00:21:26.121 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.380 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.380 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.380 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:26.380 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.380 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.380 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.380 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.639 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:21:26.639 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0NzZkOTJjODQ2ODkxOTFmYWVkMjVhMWUyOWMxY2UwYmM3ZDZlNDE3ZTQ2NWQ07DlUaw==: --dhchap-ctrl-secret 
DHHC-1:03:ZGY0ODdjM2MzZDc4ZWU0YmIxZTVjYWY2Y2EzZjk1OGI5ODk2NWVkNGE4YzQ1NDAwMjM5OWM1YmY0ZWNlYzdlZUqlC/w=: 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:27.207 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:27.465 request: 00:21:27.465 { 00:21:27.465 "name": "nvme0", 00:21:27.465 "trtype": "tcp", 00:21:27.465 "traddr": "10.0.0.2", 00:21:27.465 "adrfam": "ipv4", 00:21:27.465 "trsvcid": "4420", 00:21:27.465 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:27.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:27.465 "prchk_reftag": false, 00:21:27.465 "prchk_guard": false, 00:21:27.465 "hdgst": false, 00:21:27.465 "ddgst": false, 00:21:27.465 "dhchap_key": "key2", 00:21:27.465 "allow_unrecognized_csi": false, 00:21:27.465 "method": "bdev_nvme_attach_controller", 00:21:27.465 "req_id": 1 00:21:27.465 } 00:21:27.465 Got JSON-RPC error response 00:21:27.465 response: 00:21:27.466 { 00:21:27.466 "code": -5, 00:21:27.466 "message": 
"Input/output error" 00:21:27.466 } 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:27.725 22:30:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:27.725 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:27.984 request: 00:21:27.984 { 00:21:27.984 "name": "nvme0", 00:21:27.984 "trtype": "tcp", 00:21:27.984 "traddr": "10.0.0.2", 00:21:27.984 "adrfam": "ipv4", 00:21:27.984 "trsvcid": "4420", 00:21:27.984 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:27.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:27.984 "prchk_reftag": false, 00:21:27.984 "prchk_guard": false, 00:21:27.984 "hdgst": 
false, 00:21:27.984 "ddgst": false, 00:21:27.984 "dhchap_key": "key1", 00:21:27.984 "dhchap_ctrlr_key": "ckey2", 00:21:27.984 "allow_unrecognized_csi": false, 00:21:27.984 "method": "bdev_nvme_attach_controller", 00:21:27.984 "req_id": 1 00:21:27.984 } 00:21:27.984 Got JSON-RPC error response 00:21:27.984 response: 00:21:27.984 { 00:21:27.984 "code": -5, 00:21:27.984 "message": "Input/output error" 00:21:27.984 } 00:21:27.984 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:27.984 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:27.984 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:27.984 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:27.984 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:27.984 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.984 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.984 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.984 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:27.984 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.984 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.243 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.243 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.243 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:28.243 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.243 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:28.243 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.243 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:28.243 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.243 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.243 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.243 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.502 request: 00:21:28.502 { 00:21:28.502 "name": "nvme0", 00:21:28.502 "trtype": 
"tcp", 00:21:28.502 "traddr": "10.0.0.2", 00:21:28.502 "adrfam": "ipv4", 00:21:28.502 "trsvcid": "4420", 00:21:28.502 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:28.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:28.502 "prchk_reftag": false, 00:21:28.502 "prchk_guard": false, 00:21:28.502 "hdgst": false, 00:21:28.502 "ddgst": false, 00:21:28.502 "dhchap_key": "key1", 00:21:28.502 "dhchap_ctrlr_key": "ckey1", 00:21:28.502 "allow_unrecognized_csi": false, 00:21:28.502 "method": "bdev_nvme_attach_controller", 00:21:28.502 "req_id": 1 00:21:28.502 } 00:21:28.502 Got JSON-RPC error response 00:21:28.502 response: 00:21:28.502 { 00:21:28.502 "code": -5, 00:21:28.502 "message": "Input/output error" 00:21:28.502 } 00:21:28.502 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:28.502 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:28.502 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:28.502 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:28.502 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:28.502 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.502 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.502 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.502 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 314340 00:21:28.502 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 314340 ']' 00:21:28.502 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 314340 00:21:28.502 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:28.502 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.502 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 314340 00:21:28.761 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:28.761 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:28.761 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 314340' 00:21:28.761 killing process with pid 314340 00:21:28.761 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 314340 00:21:28.761 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 314340 00:21:28.761 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:28.761 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:28.761 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:28.761 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.761 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=336329 00:21:28.761 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:28.761 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 336329 00:21:28.761 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 336329 ']' 00:21:28.761 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.761 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.761 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.761 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.761 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.020 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:29.020 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:29.020 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:29.020 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:29.020 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.020 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.020 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:29.020 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 336329 00:21:29.020 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 336329 ']' 00:21:29.020 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.020 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:29.020 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.020 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:29.020 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.280 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:29.280 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:29.280 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:29.280 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.280 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.280 null0 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3pz 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.9Um ]] 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9Um 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BLx 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.vvG ]] 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vvG 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.pQp 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.280 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.538 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.538 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.TxB ]] 00:21:29.538 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TxB 00:21:29.538 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.538 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.538 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.538 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:29.538 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.l9G 00:21:29.539 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.539 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.539 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:29.539 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:29.539 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:29.539 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.539 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:29.539 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:29.539 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:29.539 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.539 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:29.539 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.539 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.539 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.539 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:29.539 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:29.539 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.106 nvme0n1 00:21:30.106 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.106 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.106 22:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.365 22:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.365 22:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.365 22:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.365 22:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.365 22:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.365 22:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.365 { 00:21:30.365 "cntlid": 1, 00:21:30.365 "qid": 0, 00:21:30.365 "state": "enabled", 00:21:30.365 "thread": "nvmf_tgt_poll_group_000", 00:21:30.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:30.365 "listen_address": { 00:21:30.365 "trtype": "TCP", 00:21:30.365 "adrfam": "IPv4", 00:21:30.365 "traddr": "10.0.0.2", 00:21:30.365 "trsvcid": "4420" 00:21:30.365 }, 00:21:30.365 "peer_address": { 00:21:30.365 "trtype": "TCP", 00:21:30.365 "adrfam": "IPv4", 00:21:30.365 "traddr": 
"10.0.0.1", 00:21:30.365 "trsvcid": "36698" 00:21:30.365 }, 00:21:30.365 "auth": { 00:21:30.365 "state": "completed", 00:21:30.365 "digest": "sha512", 00:21:30.365 "dhgroup": "ffdhe8192" 00:21:30.365 } 00:21:30.365 } 00:21:30.365 ]' 00:21:30.365 22:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.365 22:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.365 22:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.366 22:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:30.366 22:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.624 22:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.624 22:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.624 22:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.624 22:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:21:30.624 22:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:21:31.192 22:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.451 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:31.451 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.451 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.451 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.451 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:31.451 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.452 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.452 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.452 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:31.452 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:31.452 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:31.452 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:31.452 22:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:31.452 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:31.452 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.452 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:31.452 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.452 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:31.452 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.452 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.711 request: 00:21:31.711 { 00:21:31.711 "name": "nvme0", 00:21:31.711 "trtype": "tcp", 00:21:31.711 "traddr": "10.0.0.2", 00:21:31.711 "adrfam": "ipv4", 00:21:31.711 "trsvcid": "4420", 00:21:31.711 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:31.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:31.711 "prchk_reftag": false, 00:21:31.711 "prchk_guard": false, 00:21:31.711 "hdgst": false, 00:21:31.711 "ddgst": false, 00:21:31.711 "dhchap_key": "key3", 00:21:31.711 
"allow_unrecognized_csi": false, 00:21:31.711 "method": "bdev_nvme_attach_controller", 00:21:31.711 "req_id": 1 00:21:31.711 } 00:21:31.711 Got JSON-RPC error response 00:21:31.711 response: 00:21:31.711 { 00:21:31.711 "code": -5, 00:21:31.711 "message": "Input/output error" 00:21:31.711 } 00:21:31.711 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:31.711 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.711 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.711 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.711 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:31.711 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:31.711 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:31.711 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:31.973 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:31.973 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:31.973 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:31.973 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:31.973 22:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.973 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:31.973 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.973 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:31.973 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.973 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.241 request: 00:21:32.242 { 00:21:32.242 "name": "nvme0", 00:21:32.242 "trtype": "tcp", 00:21:32.242 "traddr": "10.0.0.2", 00:21:32.242 "adrfam": "ipv4", 00:21:32.242 "trsvcid": "4420", 00:21:32.242 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:32.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:32.242 "prchk_reftag": false, 00:21:32.242 "prchk_guard": false, 00:21:32.242 "hdgst": false, 00:21:32.242 "ddgst": false, 00:21:32.242 "dhchap_key": "key3", 00:21:32.242 "allow_unrecognized_csi": false, 00:21:32.242 "method": "bdev_nvme_attach_controller", 00:21:32.242 "req_id": 1 00:21:32.242 } 00:21:32.242 Got JSON-RPC error response 00:21:32.242 response: 00:21:32.242 { 00:21:32.242 "code": -5, 00:21:32.242 "message": "Input/output error" 00:21:32.242 } 00:21:32.242 
22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:32.242 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:32.242 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:32.242 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:32.242 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:32.242 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:32.242 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:32.242 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:32.242 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:32.242 22:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:32.501 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:32.501 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.501 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.501 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.501 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:32.501 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.501 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.501 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.501 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:32.501 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:32.501 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:32.501 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:32.501 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.501 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:32.501 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.501 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:32.501 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:32.501 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:32.760 request: 00:21:32.760 { 00:21:32.760 "name": "nvme0", 00:21:32.760 "trtype": "tcp", 00:21:32.760 "traddr": "10.0.0.2", 00:21:32.760 "adrfam": "ipv4", 00:21:32.760 "trsvcid": "4420", 00:21:32.760 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:32.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:32.760 "prchk_reftag": false, 00:21:32.760 "prchk_guard": false, 00:21:32.760 "hdgst": false, 00:21:32.760 "ddgst": false, 00:21:32.760 "dhchap_key": "key0", 00:21:32.760 "dhchap_ctrlr_key": "key1", 00:21:32.760 "allow_unrecognized_csi": false, 00:21:32.760 "method": "bdev_nvme_attach_controller", 00:21:32.760 "req_id": 1 00:21:32.760 } 00:21:32.760 Got JSON-RPC error response 00:21:32.760 response: 00:21:32.760 { 00:21:32.760 "code": -5, 00:21:32.760 "message": "Input/output error" 00:21:32.760 } 00:21:32.760 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:32.760 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:32.760 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:32.760 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:32.760 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:32.760 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:32.760 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:33.019 nvme0n1 00:21:33.019 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:33.019 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.019 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:33.278 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.278 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.278 22:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.538 22:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:33.538 22:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.538 22:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:33.538 22:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.538 22:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:33.538 22:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:33.538 22:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:34.106 nvme0n1 00:21:34.106 22:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:34.106 22:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:34.106 22:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.364 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.364 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:34.364 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.364 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.364 
22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.364 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:34.364 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:34.364 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.623 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.623 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:21:34.623 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: --dhchap-ctrl-secret DHHC-1:03:NDFkOGFiNzJmNzBmOGM5MWNjMGMyY2Y1N2ExNGI5MzYwZDMwOTUxNjk1YmJlMjVlMmRkNjEyMTY4ZWNhY2VjONslPfo=: 00:21:35.190 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:35.190 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:35.190 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:35.190 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:35.190 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:35.190 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:35.190 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:35.190 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.190 22:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.449 22:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:21:35.449 22:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:35.449 22:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:35.450 22:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:35.450 22:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.450 22:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:35.450 22:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.450 22:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:35.450 22:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:35.450 22:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:35.708 request: 00:21:35.708 { 00:21:35.708 "name": "nvme0", 00:21:35.708 "trtype": "tcp", 00:21:35.708 "traddr": "10.0.0.2", 00:21:35.708 "adrfam": "ipv4", 00:21:35.708 "trsvcid": "4420", 00:21:35.708 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:35.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:35.708 "prchk_reftag": false, 00:21:35.708 "prchk_guard": false, 00:21:35.708 "hdgst": false, 00:21:35.708 "ddgst": false, 00:21:35.708 "dhchap_key": "key1", 00:21:35.709 "allow_unrecognized_csi": false, 00:21:35.709 "method": "bdev_nvme_attach_controller", 00:21:35.709 "req_id": 1 00:21:35.709 } 00:21:35.709 Got JSON-RPC error response 00:21:35.709 response: 00:21:35.709 { 00:21:35.709 "code": -5, 00:21:35.709 "message": "Input/output error" 00:21:35.709 } 00:21:35.709 22:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:35.709 22:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:35.709 22:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:35.709 22:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:35.709 22:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:35.709 22:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:35.709 22:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:36.645 nvme0n1 00:21:36.645 22:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:36.645 22:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:36.645 22:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.645 22:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.645 22:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.645 22:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.904 22:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:36.904 22:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.904 22:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:36.904 22:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.904 22:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:36.904 22:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:36.904 22:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:37.164 nvme0n1 00:21:37.164 22:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:37.164 22:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:37.164 22:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.423 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.423 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.423 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.682 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:37.682 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.682 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.682 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.682 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: '' 2s 00:21:37.682 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:37.682 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:37.682 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: 00:21:37.682 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:37.682 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:37.682 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:37.682 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: ]] 00:21:37.682 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZWI2NzU5NWY5MWQ1ODAxMzJhYjNiM2ZmM2JkNDZhOTPRWW0V: 00:21:37.682 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:37.682 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:37.682 22:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:39.584 
22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:39.584 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: 2s 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:39.585 22:31:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: ]] 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NGQwMmRhZDY5ZjUyMDUyZTYwNzc4MDQwNGYyMGExZDlkYzhhYmE5YTBmNTM1YWY0oEVfDw==: 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:39.585 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:42.119 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:42.119 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:42.119 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:42.119 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:42.119 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:42.119 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:42.119 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:42.119 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.119 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:42.119 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.119 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.119 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.119 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:42.119 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:42.119 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:42.377 nvme0n1 00:21:42.377 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:21:42.377 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.377 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.377 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.377 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:42.377 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:42.944 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:42.944 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:42.944 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.203 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.203 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:43.203 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.203 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.203 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.203 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:43.203 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:43.462 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:43.462 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:43.462 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.462 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.462 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:43.462 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.462 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.462 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.462 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:43.462 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:43.462 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:43.462 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:43.462 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.462 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:43.462 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.462 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:43.462 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:44.030 request: 00:21:44.030 { 00:21:44.030 "name": "nvme0", 00:21:44.030 "dhchap_key": "key1", 00:21:44.030 "dhchap_ctrlr_key": "key3", 00:21:44.030 "method": "bdev_nvme_set_keys", 00:21:44.030 "req_id": 1 00:21:44.030 } 00:21:44.030 Got JSON-RPC error response 00:21:44.030 response: 00:21:44.030 { 00:21:44.030 "code": -13, 00:21:44.030 "message": "Permission denied" 00:21:44.030 } 00:21:44.030 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:44.030 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:44.030 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:44.030 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:44.030 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:44.030 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:44.030 22:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.289 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:21:44.289 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:45.225 22:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:45.225 22:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:45.225 22:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.484 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:45.484 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:45.484 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.484 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.484 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.484 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:45.484 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:45.484 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:46.052 nvme0n1 00:21:46.052 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:46.052 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.052 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.052 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.052 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:46.052 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:46.052 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:46.052 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:46.052 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.052 22:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:46.052 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.052 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:46.052 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:46.626 request: 00:21:46.626 { 00:21:46.626 "name": "nvme0", 00:21:46.626 "dhchap_key": "key2", 00:21:46.626 "dhchap_ctrlr_key": "key0", 00:21:46.626 "method": "bdev_nvme_set_keys", 00:21:46.626 "req_id": 1 00:21:46.626 } 00:21:46.626 Got JSON-RPC error response 00:21:46.626 response: 00:21:46.626 { 00:21:46.626 "code": -13, 00:21:46.626 "message": "Permission denied" 00:21:46.626 } 00:21:46.626 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:46.626 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:46.626 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:46.626 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:46.626 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:46.626 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:46.626 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.889 22:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:46.889 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:47.847 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:47.847 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:47.848 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.126 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:48.126 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:48.126 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:48.126 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 314431 00:21:48.126 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 314431 ']' 00:21:48.126 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 314431 00:21:48.126 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:48.126 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.126 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 314431 00:21:48.126 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:48.126 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:48.126 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 314431' 00:21:48.126 killing process with pid 314431 00:21:48.126 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 314431 00:21:48.126 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 314431 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:48.404 rmmod nvme_tcp 00:21:48.404 rmmod nvme_fabrics 00:21:48.404 rmmod nvme_keyring 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 336329 ']' 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 336329 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 336329 ']' 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 336329 00:21:48.404 22:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 336329 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 336329' 00:21:48.404 killing process with pid 336329 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 336329 00:21:48.404 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 336329 00:21:48.668 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:48.668 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:48.668 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:48.668 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:48.668 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:48.668 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:21:48.668 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:48.668 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:48.668 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:48.668 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.668 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.668 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.661 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:50.661 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.3pz /tmp/spdk.key-sha256.BLx /tmp/spdk.key-sha384.pQp /tmp/spdk.key-sha512.l9G /tmp/spdk.key-sha512.9Um /tmp/spdk.key-sha384.vvG /tmp/spdk.key-sha256.TxB '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:50.661 00:21:50.661 real 2m34.487s 00:21:50.661 user 5m54.957s 00:21:50.661 sys 0m24.522s 00:21:50.661 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:50.661 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.661 ************************************ 00:21:50.661 END TEST nvmf_auth_target 00:21:50.661 ************************************ 00:21:50.661 22:31:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:50.661 22:31:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:50.661 22:31:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:50.661 22:31:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:50.661 22:31:11 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:50.948 ************************************ 00:21:50.948 START TEST nvmf_bdevio_no_huge 00:21:50.948 ************************************ 00:21:50.948 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:50.948 * Looking for test storage... 00:21:50.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:50.948 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:50.948 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:21:50.948 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:50.948 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:50.948 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:50.948 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:50.948 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:50.948 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:50.948 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:50.948 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:50.948 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:50.948 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:21:50.948 22:31:11 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:50.948 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:50.948 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:50.948 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:50.948 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:50.948 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:50.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.949 --rc genhtml_branch_coverage=1 00:21:50.949 --rc genhtml_function_coverage=1 00:21:50.949 --rc genhtml_legend=1 00:21:50.949 --rc geninfo_all_blocks=1 00:21:50.949 --rc geninfo_unexecuted_blocks=1 00:21:50.949 00:21:50.949 ' 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:50.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.949 --rc genhtml_branch_coverage=1 00:21:50.949 --rc genhtml_function_coverage=1 00:21:50.949 --rc genhtml_legend=1 00:21:50.949 --rc geninfo_all_blocks=1 00:21:50.949 --rc geninfo_unexecuted_blocks=1 00:21:50.949 00:21:50.949 ' 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:50.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.949 --rc genhtml_branch_coverage=1 00:21:50.949 --rc genhtml_function_coverage=1 00:21:50.949 --rc genhtml_legend=1 00:21:50.949 --rc geninfo_all_blocks=1 00:21:50.949 --rc geninfo_unexecuted_blocks=1 00:21:50.949 00:21:50.949 ' 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:50.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.949 --rc genhtml_branch_coverage=1 00:21:50.949 --rc 
genhtml_function_coverage=1 00:21:50.949 --rc genhtml_legend=1 00:21:50.949 --rc geninfo_all_blocks=1 00:21:50.949 --rc geninfo_unexecuted_blocks=1 00:21:50.949 00:21:50.949 ' 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:50.949 22:31:11 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:50.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:50.949 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:50.950 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:21:50.950 22:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:56.492 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:21:56.493 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:56.493 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:56.493 Found net devices under 0000:af:00.0: cvl_0_0 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.493 
22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:56.493 Found net devices under 0000:af:00.1: cvl_0_1 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.493 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:21:56.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:56.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:21:56.753 00:21:56.753 --- 10.0.0.2 ping statistics --- 00:21:56.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.753 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:56.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:21:56.753 00:21:56.753 --- 10.0.0.1 ping statistics --- 00:21:56.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.753 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=343087 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 343087 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 343087 ']' 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.753 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:57.011 [2024-12-14 22:31:17.662982] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:21:57.011 [2024-12-14 22:31:17.663031] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:57.011 [2024-12-14 22:31:17.745119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:57.011 [2024-12-14 22:31:17.780985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.011 [2024-12-14 22:31:17.781017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.011 [2024-12-14 22:31:17.781024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.012 [2024-12-14 22:31:17.781029] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.012 [2024-12-14 22:31:17.781036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:57.012 [2024-12-14 22:31:17.781978] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:21:57.012 [2024-12-14 22:31:17.782085] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:21:57.012 [2024-12-14 22:31:17.782189] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:57.012 [2024-12-14 22:31:17.782190] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:21:57.012 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:57.012 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:21:57.012 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:57.012 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:57.012 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:57.269 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.269 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:57.270 [2024-12-14 22:31:17.922267] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:57.270 22:31:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:57.270 Malloc0 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:57.270 [2024-12-14 22:31:17.966549] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.270 22:31:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:57.270 { 00:21:57.270 "params": { 00:21:57.270 "name": "Nvme$subsystem", 00:21:57.270 "trtype": "$TEST_TRANSPORT", 00:21:57.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.270 "adrfam": "ipv4", 00:21:57.270 "trsvcid": "$NVMF_PORT", 00:21:57.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.270 "hdgst": ${hdgst:-false}, 00:21:57.270 "ddgst": ${ddgst:-false} 00:21:57.270 }, 00:21:57.270 "method": "bdev_nvme_attach_controller" 00:21:57.270 } 00:21:57.270 EOF 00:21:57.270 )") 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:21:57.270 22:31:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:57.270 "params": { 00:21:57.270 "name": "Nvme1", 00:21:57.270 "trtype": "tcp", 00:21:57.270 "traddr": "10.0.0.2", 00:21:57.270 "adrfam": "ipv4", 00:21:57.270 "trsvcid": "4420", 00:21:57.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:57.270 "hdgst": false, 00:21:57.270 "ddgst": false 00:21:57.270 }, 00:21:57.270 "method": "bdev_nvme_attach_controller" 00:21:57.270 }' 00:21:57.270 [2024-12-14 22:31:18.016385] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:21:57.270 [2024-12-14 22:31:18.016432] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid343142 ] 00:21:57.270 [2024-12-14 22:31:18.095036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:57.270 [2024-12-14 22:31:18.132468] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.270 [2024-12-14 22:31:18.132574] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.270 [2024-12-14 22:31:18.132575] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.528 I/O targets: 00:21:57.528 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:57.528 00:21:57.528 00:21:57.528 CUnit - A unit testing framework for C - Version 2.1-3 00:21:57.528 http://cunit.sourceforge.net/ 00:21:57.528 00:21:57.528 00:21:57.528 Suite: bdevio tests on: Nvme1n1 00:21:57.528 Test: blockdev write read block ...passed 00:21:57.528 Test: blockdev write zeroes read block ...passed 00:21:57.528 Test: blockdev write zeroes read no split ...passed 00:21:57.528 Test: blockdev write zeroes 
read split ...passed 00:21:57.786 Test: blockdev write zeroes read split partial ...passed 00:21:57.786 Test: blockdev reset ...[2024-12-14 22:31:18.419078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:57.786 [2024-12-14 22:31:18.419139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b60ea0 (9): Bad file descriptor 00:21:57.786 [2024-12-14 22:31:18.434307] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:21:57.786 passed 00:21:57.786 Test: blockdev write read 8 blocks ...passed 00:21:57.786 Test: blockdev write read size > 128k ...passed 00:21:57.786 Test: blockdev write read invalid size ...passed 00:21:57.786 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:57.786 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:57.786 Test: blockdev write read max offset ...passed 00:21:57.786 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:57.786 Test: blockdev writev readv 8 blocks ...passed 00:21:57.786 Test: blockdev writev readv 30 x 1block ...passed 00:21:57.786 Test: blockdev writev readv block ...passed 00:21:58.043 Test: blockdev writev readv size > 128k ...passed 00:21:58.043 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:58.043 Test: blockdev comparev and writev ...[2024-12-14 22:31:18.686766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:58.043 [2024-12-14 22:31:18.686796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.043 [2024-12-14 22:31:18.686813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:58.043 [2024-12-14 
22:31:18.686821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.043 [2024-12-14 22:31:18.687062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:58.043 [2024-12-14 22:31:18.687072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.043 [2024-12-14 22:31:18.687084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:58.043 [2024-12-14 22:31:18.687090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.043 [2024-12-14 22:31:18.687308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:58.043 [2024-12-14 22:31:18.687318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.043 [2024-12-14 22:31:18.687329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:58.043 [2024-12-14 22:31:18.687336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.043 [2024-12-14 22:31:18.687569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:58.043 [2024-12-14 22:31:18.687578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.043 [2024-12-14 22:31:18.687590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:21:58.043 [2024-12-14 22:31:18.687597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.043 passed 00:21:58.043 Test: blockdev nvme passthru rw ...passed 00:21:58.043 Test: blockdev nvme passthru vendor specific ...[2024-12-14 22:31:18.769226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:58.043 [2024-12-14 22:31:18.769243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.043 [2024-12-14 22:31:18.769349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:58.043 [2024-12-14 22:31:18.769358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.043 [2024-12-14 22:31:18.769457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:58.043 [2024-12-14 22:31:18.769466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.043 [2024-12-14 22:31:18.769571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:58.043 [2024-12-14 22:31:18.769580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.043 passed 00:21:58.043 Test: blockdev nvme admin passthru ...passed 00:21:58.043 Test: blockdev copy ...passed 00:21:58.043 00:21:58.043 Run Summary: Type Total Ran Passed Failed Inactive 00:21:58.044 suites 1 1 n/a 0 0 00:21:58.044 tests 23 23 23 0 0 00:21:58.044 asserts 152 152 152 0 n/a 00:21:58.044 00:21:58.044 Elapsed time = 1.061 seconds 
00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.302 rmmod nvme_tcp 00:21:58.302 rmmod nvme_fabrics 00:21:58.302 rmmod nvme_keyring 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 343087 ']' 00:21:58.302 22:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 343087 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 343087 ']' 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 343087 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.302 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 343087 00:21:58.560 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:21:58.560 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:21:58.560 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 343087' 00:21:58.560 killing process with pid 343087 00:21:58.560 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 343087 00:21:58.560 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 343087 00:21:58.819 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:58.819 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:58.819 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:58.819 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:21:58.819 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:21:58.819 22:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:58.819 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:21:58.819 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:58.819 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:58.819 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.819 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.819 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.724 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:00.724 00:22:00.724 real 0m10.006s 00:22:00.724 user 0m10.205s 00:22:00.724 sys 0m5.224s 00:22:00.724 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:00.724 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:00.724 ************************************ 00:22:00.724 END TEST nvmf_bdevio_no_huge 00:22:00.724 ************************************ 00:22:00.724 22:31:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:00.724 22:31:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:00.724 22:31:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:00.724 22:31:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:00.983 
************************************ 00:22:00.983 START TEST nvmf_tls 00:22:00.983 ************************************ 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:00.983 * Looking for test storage... 00:22:00.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:00.983 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:00.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.984 --rc genhtml_branch_coverage=1 00:22:00.984 --rc genhtml_function_coverage=1 00:22:00.984 --rc genhtml_legend=1 00:22:00.984 --rc geninfo_all_blocks=1 00:22:00.984 --rc geninfo_unexecuted_blocks=1 00:22:00.984 00:22:00.984 ' 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:00.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.984 --rc genhtml_branch_coverage=1 00:22:00.984 --rc genhtml_function_coverage=1 00:22:00.984 --rc genhtml_legend=1 00:22:00.984 --rc geninfo_all_blocks=1 00:22:00.984 --rc geninfo_unexecuted_blocks=1 00:22:00.984 00:22:00.984 ' 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:00.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.984 --rc genhtml_branch_coverage=1 00:22:00.984 --rc genhtml_function_coverage=1 00:22:00.984 --rc genhtml_legend=1 00:22:00.984 --rc geninfo_all_blocks=1 00:22:00.984 --rc geninfo_unexecuted_blocks=1 00:22:00.984 00:22:00.984 ' 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:00.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.984 --rc genhtml_branch_coverage=1 00:22:00.984 --rc genhtml_function_coverage=1 00:22:00.984 --rc genhtml_legend=1 00:22:00.984 --rc geninfo_all_blocks=1 00:22:00.984 --rc geninfo_unexecuted_blocks=1 00:22:00.984 00:22:00.984 ' 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.984 
22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:00.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:00.984 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.553 22:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:07.553 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:07.553 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:07.553 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.554 22:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:07.554 Found net devices under 0000:af:00.0: cvl_0_0 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:07.554 Found net devices under 0000:af:00.1: cvl_0_1 00:22:07.554 22:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:07.554 
22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:07.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:07.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:22:07.554 00:22:07.554 --- 10.0.0.2 ping statistics --- 00:22:07.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.554 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:07.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:22:07.554 00:22:07.554 --- 10.0.0.1 ping statistics --- 00:22:07.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.554 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=346804 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 346804 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 346804 ']' 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:07.554 [2024-12-14 22:31:27.790339] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:07.554 [2024-12-14 22:31:27.790385] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.554 [2024-12-14 22:31:27.869919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.554 [2024-12-14 22:31:27.891290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.554 [2024-12-14 22:31:27.891325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:07.554 [2024-12-14 22:31:27.891332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.554 [2024-12-14 22:31:27.891338] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.554 [2024-12-14 22:31:27.891342] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:07.554 [2024-12-14 22:31:27.891849] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:07.554 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:07.554 true 00:22:07.554 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:07.554 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:07.554 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:07.554 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:07.554 
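The tls.sh checks traced above all follow one pattern: write an option with `sock_impl_set_options`, read it back with `sock_impl_get_options`, extract the field with `jq -r`, and fail if it does not match (the `[[ 0 != \0 ]]`-style tests). A minimal sketch of the verification half in Python; the JSON reply shape is inferred from the `jq` filters in the transcript (`.tls_version`, `.enable_ktls`), so the sample reply below is an assumption, not captured output:

```python
import json

def check_sock_impl_option(get_options_json: str, field: str, expected):
    """Mirror of the transcript's `jq -r .<field>` + mismatch check.

    get_options_json: raw JSON reply from `rpc.py sock_impl_get_options -i ssl`
    (field names inferred from the jq filters seen in the log).
    """
    opts = json.loads(get_options_json)
    actual = opts[field]
    if actual != expected:
        # Corresponds to the `[[ $version != $expected ]]` branch failing the test
        raise ValueError(f"{field}: expected {expected!r}, got {actual!r}")
    return actual

# Hypothetical reply carrying the two fields this test section reads:
reply = '{"tls_version": 13, "enable_ktls": false}'
check_sock_impl_option(reply, "tls_version", 13)
check_sock_impl_option(reply, "enable_ktls", False)
```

The shell version compares strings from `jq -r`; this sketch compares the parsed JSON values directly, which is why `false` maps to Python's `False`.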
22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:07.813 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:07.813 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:08.072 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:08.072 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:08.072 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:08.072 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:08.072 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:08.330 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:08.330 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:08.330 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:08.330 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:08.589 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:08.589 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:08.589 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:22:08.848 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:08.848 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:08.848 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:08.848 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:08.848 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:09.107 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:09.107 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:09.365 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:09.365 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:09.365 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:09.365 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:09.365 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:09.365 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:09.365 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:09.366 22:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.ZaZq7rodYm 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.qn5yopYETN 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ZaZq7rodYm 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.qn5yopYETN 00:22:09.366 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:09.624 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:09.883 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.ZaZq7rodYm 00:22:09.883 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZaZq7rodYm 00:22:09.883 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:10.142 [2024-12-14 22:31:30.773333] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.142 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:10.142 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:10.400 [2024-12-14 22:31:31.150274] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:10.400 [2024-12-14 22:31:31.150515] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.400 22:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:10.659 malloc0 00:22:10.659 22:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:10.659 22:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZaZq7rodYm 00:22:10.918 22:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:11.176 22:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ZaZq7rodYm 00:22:21.150 Initializing NVMe Controllers 00:22:21.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:21.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:21.150 Initialization complete. Launching workers. 
00:22:21.150 ======================================================== 00:22:21.150 Latency(us) 00:22:21.150 Device Information : IOPS MiB/s Average min max 00:22:21.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16789.17 65.58 3812.06 818.11 5005.63 00:22:21.150 ======================================================== 00:22:21.150 Total : 16789.17 65.58 3812.06 818.11 5005.63 00:22:21.150 00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZaZq7rodYm 00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZaZq7rodYm 00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=349278 00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 349278 /var/tmp/bdevperf.sock 00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 349278 ']' 00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
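The spdk_nvme_perf summary above reports both IOPS and MiB/s for 4096-byte I/O; the two columns are mutually consistent, since MiB/s = IOPS × io_size / 2^20. A quick sanity check against the numbers in the table:

```python
def iops_to_mibps(iops: float, io_size_bytes: int = 4096) -> float:
    # MiB/s = IOPS * bytes per I/O / bytes per MiB
    return iops * io_size_bytes / (1 << 20)

# IOPS value taken from the perf summary in the transcript.
print(round(iops_to_mibps(16789.17), 2))  # 65.58, matching the MiB/s column
```

With 4 KiB I/O this reduces to IOPS / 256, which is a handy mental check when scanning these tables.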
00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:21.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.409 [2024-12-14 22:31:42.081071] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:21.409 [2024-12-14 22:31:42.081119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid349278 ] 00:22:21.409 [2024-12-14 22:31:42.155141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.409 [2024-12-14 22:31:42.177841] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:21.409 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZaZq7rodYm 00:22:21.667 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:22:21.926 [2024-12-14 22:31:42.617966] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:21.926 TLSTESTn1 00:22:21.926 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:21.926 Running I/O for 10 seconds... 00:22:24.238 4316.00 IOPS, 16.86 MiB/s [2024-12-14T21:31:46.057Z] 4776.00 IOPS, 18.66 MiB/s [2024-12-14T21:31:46.993Z] 5047.00 IOPS, 19.71 MiB/s [2024-12-14T21:31:47.928Z] 5023.25 IOPS, 19.62 MiB/s [2024-12-14T21:31:48.864Z] 5024.80 IOPS, 19.63 MiB/s [2024-12-14T21:31:50.242Z] 5110.50 IOPS, 19.96 MiB/s [2024-12-14T21:31:51.178Z] 5170.29 IOPS, 20.20 MiB/s [2024-12-14T21:31:52.115Z] 5211.88 IOPS, 20.36 MiB/s [2024-12-14T21:31:53.051Z] 5256.33 IOPS, 20.53 MiB/s [2024-12-14T21:31:53.051Z] 5286.90 IOPS, 20.65 MiB/s 00:22:32.167 Latency(us) 00:22:32.167 [2024-12-14T21:31:53.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.167 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:32.167 Verification LBA range: start 0x0 length 0x2000 00:22:32.167 TLSTESTn1 : 10.01 5292.54 20.67 0.00 0.00 24150.34 5242.88 69405.74 00:22:32.167 [2024-12-14T21:31:53.051Z] =================================================================================================================== 00:22:32.167 [2024-12-14T21:31:53.051Z] Total : 5292.54 20.67 0.00 0.00 24150.34 5242.88 69405.74 00:22:32.167 { 00:22:32.167 "results": [ 00:22:32.167 { 00:22:32.167 "job": "TLSTESTn1", 00:22:32.167 "core_mask": "0x4", 00:22:32.167 "workload": "verify", 00:22:32.167 "status": "finished", 00:22:32.167 "verify_range": { 00:22:32.167 "start": 0, 00:22:32.167 "length": 8192 00:22:32.167 }, 00:22:32.167 "queue_depth": 128, 00:22:32.167 "io_size": 4096, 00:22:32.167 "runtime": 10.013159, 00:22:32.167 "iops": 
5292.535552466509, 00:22:32.167 "mibps": 20.6739670018223, 00:22:32.167 "io_failed": 0, 00:22:32.167 "io_timeout": 0, 00:22:32.167 "avg_latency_us": 24150.34356369649, 00:22:32.167 "min_latency_us": 5242.88, 00:22:32.167 "max_latency_us": 69405.74476190476 00:22:32.167 } 00:22:32.167 ], 00:22:32.167 "core_count": 1 00:22:32.167 } 00:22:32.167 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:32.167 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 349278 00:22:32.167 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 349278 ']' 00:22:32.167 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 349278 00:22:32.167 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:32.167 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.167 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 349278 00:22:32.167 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:32.167 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:32.167 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 349278' 00:22:32.167 killing process with pid 349278 00:22:32.167 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 349278 00:22:32.167 Received shutdown signal, test time was about 10.000000 seconds 00:22:32.167 00:22:32.167 Latency(us) 00:22:32.167 [2024-12-14T21:31:53.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.167 [2024-12-14T21:31:53.051Z] 
=================================================================================================================== 00:22:32.167 [2024-12-14T21:31:53.051Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:32.167 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 349278 00:22:32.426 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qn5yopYETN 00:22:32.426 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:32.426 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qn5yopYETN 00:22:32.426 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:32.426 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:32.426 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:32.426 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:32.426 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qn5yopYETN 00:22:32.426 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:32.427 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:32.427 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:32.427 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qn5yopYETN 00:22:32.427 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
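The `/tmp/tmp.*` files exercised by these bdevperf runs hold keys in the NVMe TLS PSK interchange format produced earlier by `format_interchange_psk` (`NVMeTLSkey-1:<hash>:<base64 payload>:`). A hedged sketch of that encoding, assuming (as the `python -` heredoc in common.sh suggests) that the payload is the configured key bytes with a little-endian CRC-32 appended before base64 encoding:

```python
import base64
import struct
import zlib

def format_interchange_psk(key_string: str, hash_id: int = 1) -> str:
    """Sketch of the interchange encoding seen in the transcript.

    Assumption: payload = key bytes || CRC-32(key) (little-endian),
    base64-encoded, wrapped as NVMeTLSkey-1:0<hash_id>:<payload>:
    """
    key = key_string.encode()  # the hex string itself is the secret material
    payload = key + struct.pack("<I", zlib.crc32(key))
    return f"NVMeTLSkey-1:{hash_id:02d}:{base64.b64encode(payload).decode()}:"

psk = format_interchange_psk("00112233445566778899aabbccddeeff")
print(psk[:16])  # NVMeTLSkey-1:01:
```

The trailing CRC is what lets a consumer detect a corrupted interchange key before attempting a TLS handshake with it; the `chmod 0600` in the log reflects that these files carry secret material.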
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:32.427 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:32.427 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=350923 00:22:32.427 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:32.427 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 350923 /var/tmp/bdevperf.sock 00:22:32.427 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 350923 ']' 00:22:32.427 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.427 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.427 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:32.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:32.427 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.427 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.427 [2024-12-14 22:31:53.114137] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:22:32.427 [2024-12-14 22:31:53.114182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350923 ] 00:22:32.427 [2024-12-14 22:31:53.185660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.427 [2024-12-14 22:31:53.208235] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.427 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.427 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:32.427 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qn5yopYETN 00:22:32.685 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:32.944 [2024-12-14 22:31:53.639494] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.944 [2024-12-14 22:31:53.648909] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:32.944 [2024-12-14 22:31:53.649798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x593340 (107): Transport endpoint is not connected 00:22:32.944 [2024-12-14 22:31:53.650791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x593340 (9): Bad file descriptor 00:22:32.944 [2024-12-14 
22:31:53.651793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:32.944 [2024-12-14 22:31:53.651802] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:32.944 [2024-12-14 22:31:53.651810] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:32.944 [2024-12-14 22:31:53.651818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:32.944 request: 00:22:32.944 { 00:22:32.944 "name": "TLSTEST", 00:22:32.944 "trtype": "tcp", 00:22:32.944 "traddr": "10.0.0.2", 00:22:32.944 "adrfam": "ipv4", 00:22:32.944 "trsvcid": "4420", 00:22:32.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:32.944 "prchk_reftag": false, 00:22:32.944 "prchk_guard": false, 00:22:32.944 "hdgst": false, 00:22:32.944 "ddgst": false, 00:22:32.944 "psk": "key0", 00:22:32.944 "allow_unrecognized_csi": false, 00:22:32.944 "method": "bdev_nvme_attach_controller", 00:22:32.944 "req_id": 1 00:22:32.944 } 00:22:32.944 Got JSON-RPC error response 00:22:32.945 response: 00:22:32.945 { 00:22:32.945 "code": -5, 00:22:32.945 "message": "Input/output error" 00:22:32.945 } 00:22:32.945 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 350923 00:22:32.945 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 350923 ']' 00:22:32.945 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 350923 00:22:32.945 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:32.945 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.945 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 350923 00:22:32.945 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:32.945 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:32.945 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 350923' 00:22:32.945 killing process with pid 350923 00:22:32.945 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 350923 00:22:32.945 Received shutdown signal, test time was about 10.000000 seconds 00:22:32.945 00:22:32.945 Latency(us) 00:22:32.945 [2024-12-14T21:31:53.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.945 [2024-12-14T21:31:53.829Z] =================================================================================================================== 00:22:32.945 [2024-12-14T21:31:53.829Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:32.945 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 350923 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZaZq7rodYm 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZaZq7rodYm 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZaZq7rodYm 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZaZq7rodYm 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=351102 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 351102 
/var/tmp/bdevperf.sock 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351102 ']' 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:33.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.204 22:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.204 [2024-12-14 22:31:53.921174] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:22:33.204 [2024-12-14 22:31:53.921226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351102 ] 00:22:33.204 [2024-12-14 22:31:53.989173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.204 [2024-12-14 22:31:54.008742] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.463 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.463 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:33.463 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZaZq7rodYm 00:22:33.463 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:33.722 [2024-12-14 22:31:54.471855] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:33.722 [2024-12-14 22:31:54.480922] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:33.722 [2024-12-14 22:31:54.480945] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:33.722 [2024-12-14 22:31:54.480968] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:33.722 [2024-12-14 22:31:54.481059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2c340 (107): Transport endpoint is not connected 00:22:33.722 [2024-12-14 22:31:54.482053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2c340 (9): Bad file descriptor 00:22:33.722 [2024-12-14 22:31:54.483055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:33.722 [2024-12-14 22:31:54.483068] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:33.722 [2024-12-14 22:31:54.483075] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:33.722 [2024-12-14 22:31:54.483083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:33.722 request: 00:22:33.722 { 00:22:33.722 "name": "TLSTEST", 00:22:33.722 "trtype": "tcp", 00:22:33.722 "traddr": "10.0.0.2", 00:22:33.722 "adrfam": "ipv4", 00:22:33.722 "trsvcid": "4420", 00:22:33.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.722 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:33.722 "prchk_reftag": false, 00:22:33.722 "prchk_guard": false, 00:22:33.722 "hdgst": false, 00:22:33.722 "ddgst": false, 00:22:33.722 "psk": "key0", 00:22:33.722 "allow_unrecognized_csi": false, 00:22:33.722 "method": "bdev_nvme_attach_controller", 00:22:33.722 "req_id": 1 00:22:33.722 } 00:22:33.722 Got JSON-RPC error response 00:22:33.722 response: 00:22:33.722 { 00:22:33.722 "code": -5, 00:22:33.722 "message": "Input/output error" 00:22:33.722 } 00:22:33.722 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 351102 00:22:33.722 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351102 ']' 00:22:33.722 22:31:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351102 00:22:33.722 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:33.722 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.722 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351102 00:22:33.722 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:33.722 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:33.722 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351102' 00:22:33.722 killing process with pid 351102 00:22:33.722 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351102 00:22:33.722 Received shutdown signal, test time was about 10.000000 seconds 00:22:33.722 00:22:33.722 Latency(us) 00:22:33.722 [2024-12-14T21:31:54.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.722 [2024-12-14T21:31:54.606Z] =================================================================================================================== 00:22:33.722 [2024-12-14T21:31:54.606Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:33.722 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351102 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:33.982 22:31:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZaZq7rodYm 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZaZq7rodYm 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZaZq7rodYm 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZaZq7rodYm 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=351319 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 351319 /var/tmp/bdevperf.sock 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351319 ']' 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:33.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.982 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.982 [2024-12-14 22:31:54.760604] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:22:33.982 [2024-12-14 22:31:54.760656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351319 ] 00:22:33.982 [2024-12-14 22:31:54.836407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.982 [2024-12-14 22:31:54.856628] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.241 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.241 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:34.241 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZaZq7rodYm 00:22:34.499 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:34.499 [2024-12-14 22:31:55.279429] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:34.499 [2024-12-14 22:31:55.283994] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:34.499 [2024-12-14 22:31:55.284017] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:34.499 [2024-12-14 22:31:55.284055] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:34.499 [2024-12-14 22:31:55.284710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ad340 (107): Transport endpoint is not connected 00:22:34.499 [2024-12-14 22:31:55.285702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ad340 (9): Bad file descriptor 00:22:34.499 [2024-12-14 22:31:55.286703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:34.499 [2024-12-14 22:31:55.286711] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:34.499 [2024-12-14 22:31:55.286719] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:34.499 [2024-12-14 22:31:55.286726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:22:34.499 request: 00:22:34.499 { 00:22:34.499 "name": "TLSTEST", 00:22:34.499 "trtype": "tcp", 00:22:34.499 "traddr": "10.0.0.2", 00:22:34.499 "adrfam": "ipv4", 00:22:34.499 "trsvcid": "4420", 00:22:34.499 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:34.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:34.499 "prchk_reftag": false, 00:22:34.499 "prchk_guard": false, 00:22:34.499 "hdgst": false, 00:22:34.499 "ddgst": false, 00:22:34.499 "psk": "key0", 00:22:34.499 "allow_unrecognized_csi": false, 00:22:34.499 "method": "bdev_nvme_attach_controller", 00:22:34.499 "req_id": 1 00:22:34.499 } 00:22:34.499 Got JSON-RPC error response 00:22:34.499 response: 00:22:34.499 { 00:22:34.499 "code": -5, 00:22:34.499 "message": "Input/output error" 00:22:34.499 } 00:22:34.499 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 351319 00:22:34.499 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351319 ']' 00:22:34.499 22:31:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351319 00:22:34.499 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:34.500 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.500 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351319 00:22:34.500 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:34.500 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:34.500 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351319' 00:22:34.500 killing process with pid 351319 00:22:34.500 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351319 00:22:34.500 Received shutdown signal, test time was about 10.000000 seconds 00:22:34.500 00:22:34.500 Latency(us) 00:22:34.500 [2024-12-14T21:31:55.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.500 [2024-12-14T21:31:55.384Z] =================================================================================================================== 00:22:34.500 [2024-12-14T21:31:55.384Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:34.500 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351319 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:34.758 22:31:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:34.758 22:31:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=351347 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 351347 /var/tmp/bdevperf.sock 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351347 ']' 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:34.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:34.758 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.758 [2024-12-14 22:31:55.535480] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:22:34.758 [2024-12-14 22:31:55.535546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351347 ] 00:22:34.758 [2024-12-14 22:31:55.600897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.758 [2024-12-14 22:31:55.620429] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.017 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.017 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:35.017 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:35.017 [2024-12-14 22:31:55.874775] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:35.017 [2024-12-14 22:31:55.874802] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:35.017 request: 00:22:35.017 { 00:22:35.017 "name": "key0", 00:22:35.017 "path": "", 00:22:35.017 "method": "keyring_file_add_key", 00:22:35.017 "req_id": 1 00:22:35.017 } 00:22:35.017 Got JSON-RPC error response 00:22:35.017 response: 00:22:35.017 { 00:22:35.017 "code": -1, 00:22:35.017 "message": "Operation not permitted" 00:22:35.017 } 00:22:35.276 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:35.276 [2024-12-14 22:31:56.087407] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:22:35.276 [2024-12-14 22:31:56.087435] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:35.276 request: 00:22:35.276 { 00:22:35.276 "name": "TLSTEST", 00:22:35.276 "trtype": "tcp", 00:22:35.276 "traddr": "10.0.0.2", 00:22:35.276 "adrfam": "ipv4", 00:22:35.276 "trsvcid": "4420", 00:22:35.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:35.276 "prchk_reftag": false, 00:22:35.276 "prchk_guard": false, 00:22:35.276 "hdgst": false, 00:22:35.276 "ddgst": false, 00:22:35.276 "psk": "key0", 00:22:35.276 "allow_unrecognized_csi": false, 00:22:35.276 "method": "bdev_nvme_attach_controller", 00:22:35.276 "req_id": 1 00:22:35.276 } 00:22:35.276 Got JSON-RPC error response 00:22:35.276 response: 00:22:35.276 { 00:22:35.276 "code": -126, 00:22:35.277 "message": "Required key not available" 00:22:35.277 } 00:22:35.277 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 351347 00:22:35.277 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351347 ']' 00:22:35.277 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351347 00:22:35.277 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:35.277 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.277 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351347 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351347' 00:22:35.536 killing process with pid 351347 00:22:35.536 
22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351347 00:22:35.536 Received shutdown signal, test time was about 10.000000 seconds 00:22:35.536 00:22:35.536 Latency(us) 00:22:35.536 [2024-12-14T21:31:56.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.536 [2024-12-14T21:31:56.420Z] =================================================================================================================== 00:22:35.536 [2024-12-14T21:31:56.420Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351347 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 346804 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 346804 ']' 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 346804 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 346804 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 346804' 00:22:35.536 killing process with pid 346804 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 346804 00:22:35.536 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 346804 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.rztsbDaX4n 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:35.796 22:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.rztsbDaX4n 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=351588 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 351588 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351588 ']' 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.796 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.796 [2024-12-14 22:31:56.631720] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:22:35.796 [2024-12-14 22:31:56.631765] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.056 [2024-12-14 22:31:56.707046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.056 [2024-12-14 22:31:56.727580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.056 [2024-12-14 22:31:56.727616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.056 [2024-12-14 22:31:56.727623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.056 [2024-12-14 22:31:56.727630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.056 [2024-12-14 22:31:56.727634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:36.056 [2024-12-14 22:31:56.728142] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.056 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.056 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:36.057 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:36.057 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:36.057 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.057 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.057 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.rztsbDaX4n 00:22:36.057 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rztsbDaX4n 00:22:36.057 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:36.315 [2024-12-14 22:31:57.026709] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.315 22:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:36.575 22:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:36.575 [2024-12-14 22:31:57.431779] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:36.575 [2024-12-14 22:31:57.431976] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:36.834 22:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:36.834 malloc0 00:22:36.834 22:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:37.093 22:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rztsbDaX4n 00:22:37.352 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:37.612 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rztsbDaX4n 00:22:37.612 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:37.612 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:37.612 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:37.612 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rztsbDaX4n 00:22:37.612 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:37.612 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:37.612 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=351838 00:22:37.612 22:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:37.612 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 351838 /var/tmp/bdevperf.sock 00:22:37.612 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351838 ']' 00:22:37.612 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.612 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:37.612 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:37.612 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:37.612 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.612 [2024-12-14 22:31:58.270759] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:22:37.612 [2024-12-14 22:31:58.270805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351838 ] 00:22:37.612 [2024-12-14 22:31:58.343351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.612 [2024-12-14 22:31:58.365507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.612 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.612 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:37.612 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rztsbDaX4n 00:22:37.871 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:38.130 [2024-12-14 22:31:58.848650] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:38.130 TLSTESTn1 00:22:38.130 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:38.389 Running I/O for 10 seconds... 
00:22:40.267 3872.00 IOPS, 15.12 MiB/s [2024-12-14T21:32:02.088Z] 4662.00 IOPS, 18.21 MiB/s [2024-12-14T21:32:03.466Z] 4483.33 IOPS, 17.51 MiB/s [2024-12-14T21:32:04.402Z] 4685.00 IOPS, 18.30 MiB/s [2024-12-14T21:32:05.339Z] 4831.40 IOPS, 18.87 MiB/s [2024-12-14T21:32:06.275Z] 4939.50 IOPS, 19.29 MiB/s [2024-12-14T21:32:07.212Z] 4983.00 IOPS, 19.46 MiB/s [2024-12-14T21:32:08.149Z] 5066.38 IOPS, 19.79 MiB/s [2024-12-14T21:32:09.086Z] 5127.22 IOPS, 20.03 MiB/s [2024-12-14T21:32:09.086Z] 5134.20 IOPS, 20.06 MiB/s 00:22:48.202 Latency(us) 00:22:48.202 [2024-12-14T21:32:09.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.203 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:48.203 Verification LBA range: start 0x0 length 0x2000 00:22:48.203 TLSTESTn1 : 10.02 5137.20 20.07 0.00 0.00 24878.67 4712.35 37199.48 00:22:48.203 [2024-12-14T21:32:09.087Z] =================================================================================================================== 00:22:48.203 [2024-12-14T21:32:09.087Z] Total : 5137.20 20.07 0.00 0.00 24878.67 4712.35 37199.48 00:22:48.203 { 00:22:48.203 "results": [ 00:22:48.203 { 00:22:48.203 "job": "TLSTESTn1", 00:22:48.203 "core_mask": "0x4", 00:22:48.203 "workload": "verify", 00:22:48.203 "status": "finished", 00:22:48.203 "verify_range": { 00:22:48.203 "start": 0, 00:22:48.203 "length": 8192 00:22:48.203 }, 00:22:48.203 "queue_depth": 128, 00:22:48.203 "io_size": 4096, 00:22:48.203 "runtime": 10.018881, 00:22:48.203 "iops": 5137.200451826906, 00:22:48.203 "mibps": 20.06718926494885, 00:22:48.203 "io_failed": 0, 00:22:48.203 "io_timeout": 0, 00:22:48.203 "avg_latency_us": 24878.665518976286, 00:22:48.203 "min_latency_us": 4712.350476190476, 00:22:48.203 "max_latency_us": 37199.4819047619 00:22:48.203 } 00:22:48.203 ], 00:22:48.203 "core_count": 1 00:22:48.203 } 00:22:48.462 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
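The bdevperf summary above reports both `iops` and `mibps` for TLSTESTn1; the two columns are related through the configured IO size (`-o 4096`): MiB/s = IOPS × io_size / 2^20. A small check using the values from the JSON result block above:

```python
# Values copied from the TLSTESTn1 "results" JSON in the log above.
result = {
    "iops": 5137.200451826906,
    "io_size": 4096,
    "runtime": 10.018881,
}

# MiB/s is derived from IOPS and the per-IO size (4096 B = 1/256 MiB).
mibps = result["iops"] * result["io_size"] / 2**20
print(round(mibps, 2))
```

This reproduces the 20.07 MiB/s figure shown in the Device Information table.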
1' SIGINT SIGTERM EXIT 00:22:48.462 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 351838 00:22:48.462 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351838 ']' 00:22:48.462 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351838 00:22:48.462 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:48.462 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:48.462 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351838 00:22:48.462 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351838' 00:22:48.463 killing process with pid 351838 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351838 00:22:48.463 Received shutdown signal, test time was about 10.000000 seconds 00:22:48.463 00:22:48.463 Latency(us) 00:22:48.463 [2024-12-14T21:32:09.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.463 [2024-12-14T21:32:09.347Z] =================================================================================================================== 00:22:48.463 [2024-12-14T21:32:09.347Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351838 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.rztsbDaX4n 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # 
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rztsbDaX4n 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rztsbDaX4n 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rztsbDaX4n 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rztsbDaX4n 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=353618 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 353618 /var/tmp/bdevperf.sock 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 353618 ']' 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.463 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.722 [2024-12-14 22:32:09.364228] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:22:48.722 [2024-12-14 22:32:09.364278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353618 ] 00:22:48.722 [2024-12-14 22:32:09.435525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.722 [2024-12-14 22:32:09.455137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.722 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.722 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:48.722 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rztsbDaX4n 00:22:48.981 [2024-12-14 22:32:09.717869] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rztsbDaX4n': 0100666 00:22:48.981 [2024-12-14 22:32:09.717900] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:48.981 request: 00:22:48.981 { 00:22:48.981 "name": "key0", 00:22:48.981 "path": "/tmp/tmp.rztsbDaX4n", 00:22:48.981 "method": "keyring_file_add_key", 00:22:48.981 "req_id": 1 00:22:48.981 } 00:22:48.981 Got JSON-RPC error response 00:22:48.981 response: 00:22:48.981 { 00:22:48.981 "code": -1, 00:22:48.981 "message": "Operation not permitted" 00:22:48.981 } 00:22:48.981 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:49.240 [2024-12-14 22:32:09.914449] bdev_nvme_rpc.c: 
515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.240 [2024-12-14 22:32:09.914480] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:49.240 request: 00:22:49.240 { 00:22:49.240 "name": "TLSTEST", 00:22:49.240 "trtype": "tcp", 00:22:49.240 "traddr": "10.0.0.2", 00:22:49.240 "adrfam": "ipv4", 00:22:49.240 "trsvcid": "4420", 00:22:49.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:49.240 "prchk_reftag": false, 00:22:49.240 "prchk_guard": false, 00:22:49.240 "hdgst": false, 00:22:49.240 "ddgst": false, 00:22:49.240 "psk": "key0", 00:22:49.240 "allow_unrecognized_csi": false, 00:22:49.240 "method": "bdev_nvme_attach_controller", 00:22:49.240 "req_id": 1 00:22:49.240 } 00:22:49.240 Got JSON-RPC error response 00:22:49.240 response: 00:22:49.240 { 00:22:49.240 "code": -126, 00:22:49.241 "message": "Required key not available" 00:22:49.241 } 00:22:49.241 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 353618 00:22:49.241 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 353618 ']' 00:22:49.241 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 353618 00:22:49.241 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:49.241 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.241 22:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353618 00:22:49.241 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:49.241 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:49.241 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
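The negative test above flips the key file to mode 0666, and `keyring_file_add_key` rejects it ("Invalid permissions for key file '/tmp/tmp.rztsbDaX4n': 0100666"): PSK files must not be accessible to group or other. An illustrative sketch of such a permission gate (not SPDK's actual `keyring_file_check_path` code, just the check it implies):

```python
import os
import stat
import tempfile

def check_key_file_permissions(path: str) -> None:
    # Reject key files with any group/other permission bits set,
    # mirroring the "Invalid permissions for key file" error above.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        raise PermissionError(
            "Invalid permissions for key file '{}': {:04o}".format(path, mode))

# A 0600 key passes; a 0666 key is rejected, like the test flow above.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o600)
check_key_file_permissions(path)  # accepted
os.chmod(path, 0o666)
try:
    check_key_file_permissions(path)
except PermissionError as e:
    print(e)  # rejected, as in the log
os.remove(path)
```

This is why the earlier positive run did `chmod 0600` on the temp key file before `keyring_file_add_key`, and why the `chmod 0666` variant makes the subsequent `bdev_nvme_attach_controller --psk key0` fail with "Required key not available".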
'killing process with pid 353618' 00:22:49.241 killing process with pid 353618 00:22:49.241 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 353618 00:22:49.241 Received shutdown signal, test time was about 10.000000 seconds 00:22:49.241 00:22:49.241 Latency(us) 00:22:49.241 [2024-12-14T21:32:10.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.241 [2024-12-14T21:32:10.125Z] =================================================================================================================== 00:22:49.241 [2024-12-14T21:32:10.125Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:49.241 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 353618 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 351588 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351588 ']' 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351588 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351588 00:22:49.500 22:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351588' 00:22:49.500 killing process with pid 351588 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351588 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351588 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=353853 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 353853 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 353853 ']' 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:49.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.500 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.760 [2024-12-14 22:32:10.429364] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:49.760 [2024-12-14 22:32:10.429410] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.760 [2024-12-14 22:32:10.506702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.760 [2024-12-14 22:32:10.527278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.760 [2024-12-14 22:32:10.527314] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.760 [2024-12-14 22:32:10.527321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.760 [2024-12-14 22:32:10.527327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.760 [2024-12-14 22:32:10.527332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:49.760 [2024-12-14 22:32:10.527822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.760 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.760 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:49.760 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:49.760 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:49.760 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.018 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.018 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.rztsbDaX4n 00:22:50.018 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:50.018 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.rztsbDaX4n 00:22:50.018 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:22:50.018 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.018 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:22:50.018 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.018 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.rztsbDaX4n 00:22:50.018 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rztsbDaX4n 00:22:50.018 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:50.018 [2024-12-14 22:32:10.826444] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.018 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:50.277 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:50.536 [2024-12-14 22:32:11.211437] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:50.536 [2024-12-14 22:32:11.211618] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.536 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:50.795 malloc0 00:22:50.795 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:50.795 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rztsbDaX4n 00:22:51.054 [2024-12-14 22:32:11.829126] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rztsbDaX4n': 0100666 00:22:51.054 [2024-12-14 22:32:11.829149] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:51.054 request: 00:22:51.054 { 00:22:51.054 "name": "key0", 00:22:51.054 "path": "/tmp/tmp.rztsbDaX4n", 00:22:51.054 "method": "keyring_file_add_key", 00:22:51.054 "req_id": 1 
00:22:51.054 } 00:22:51.054 Got JSON-RPC error response 00:22:51.054 response: 00:22:51.054 { 00:22:51.054 "code": -1, 00:22:51.054 "message": "Operation not permitted" 00:22:51.054 } 00:22:51.054 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:51.314 [2024-12-14 22:32:12.021638] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:22:51.314 [2024-12-14 22:32:12.021669] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:51.314 request: 00:22:51.314 { 00:22:51.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.314 "host": "nqn.2016-06.io.spdk:host1", 00:22:51.314 "psk": "key0", 00:22:51.314 "method": "nvmf_subsystem_add_host", 00:22:51.314 "req_id": 1 00:22:51.314 } 00:22:51.314 Got JSON-RPC error response 00:22:51.314 response: 00:22:51.314 { 00:22:51.314 "code": -32603, 00:22:51.314 "message": "Internal error" 00:22:51.314 } 00:22:51.314 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:51.314 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:51.314 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:51.314 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:51.314 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 353853 00:22:51.314 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 353853 ']' 00:22:51.314 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 353853 00:22:51.314 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:51.314 22:32:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.314 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353853 00:22:51.314 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:51.314 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:51.314 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353853' 00:22:51.314 killing process with pid 353853 00:22:51.314 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 353853 00:22:51.314 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 353853 00:22:51.574 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.rztsbDaX4n 00:22:51.574 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:22:51.574 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:51.574 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.574 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.574 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=354121 00:22:51.574 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 354121 00:22:51.574 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:51.574 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 354121 ']' 00:22:51.574 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.574 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.574 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.574 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.574 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.574 [2024-12-14 22:32:12.323973] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:51.574 [2024-12-14 22:32:12.324016] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.574 [2024-12-14 22:32:12.396892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.574 [2024-12-14 22:32:12.417097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.574 [2024-12-14 22:32:12.417130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.575 [2024-12-14 22:32:12.417136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.575 [2024-12-14 22:32:12.417142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.575 [2024-12-14 22:32:12.417147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:51.575 [2024-12-14 22:32:12.417633] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.834 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.834 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:51.834 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:51.834 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:51.834 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.834 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.834 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.rztsbDaX4n 00:22:51.834 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rztsbDaX4n 00:22:51.834 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:52.094 [2024-12-14 22:32:12.723845] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.094 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:52.094 22:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:52.353 [2024-12-14 22:32:13.116855] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:52.353 [2024-12-14 22:32:13.117053] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:52.353 22:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:52.612 malloc0 00:22:52.612 22:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:52.874 22:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rztsbDaX4n 00:22:52.874 22:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:53.133 22:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.133 22:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=354368 00:22:53.133 22:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.133 22:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 354368 /var/tmp/bdevperf.sock 00:22:53.133 22:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 354368 ']' 00:22:53.133 22:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.133 22:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.133 22:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:22:53.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.133 22:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.133 22:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.133 [2024-12-14 22:32:13.964010] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:53.133 [2024-12-14 22:32:13.964059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid354368 ] 00:22:53.391 [2024-12-14 22:32:14.042156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.391 [2024-12-14 22:32:14.064434] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.391 22:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.391 22:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:53.391 22:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rztsbDaX4n 00:22:53.650 22:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:53.909 [2024-12-14 22:32:14.539420] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.909 TLSTESTn1 00:22:53.909 22:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:54.169 22:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:22:54.169 "subsystems": [ 00:22:54.169 { 00:22:54.169 "subsystem": "keyring", 00:22:54.169 "config": [ 00:22:54.169 { 00:22:54.169 "method": "keyring_file_add_key", 00:22:54.169 "params": { 00:22:54.169 "name": "key0", 00:22:54.169 "path": "/tmp/tmp.rztsbDaX4n" 00:22:54.169 } 00:22:54.169 } 00:22:54.169 ] 00:22:54.169 }, 00:22:54.169 { 00:22:54.169 "subsystem": "iobuf", 00:22:54.169 "config": [ 00:22:54.169 { 00:22:54.169 "method": "iobuf_set_options", 00:22:54.169 "params": { 00:22:54.169 "small_pool_count": 8192, 00:22:54.169 "large_pool_count": 1024, 00:22:54.169 "small_bufsize": 8192, 00:22:54.169 "large_bufsize": 135168, 00:22:54.169 "enable_numa": false 00:22:54.169 } 00:22:54.169 } 00:22:54.169 ] 00:22:54.169 }, 00:22:54.169 { 00:22:54.169 "subsystem": "sock", 00:22:54.169 "config": [ 00:22:54.169 { 00:22:54.169 "method": "sock_set_default_impl", 00:22:54.169 "params": { 00:22:54.169 "impl_name": "posix" 00:22:54.169 } 00:22:54.169 }, 00:22:54.169 { 00:22:54.169 "method": "sock_impl_set_options", 00:22:54.169 "params": { 00:22:54.169 "impl_name": "ssl", 00:22:54.169 "recv_buf_size": 4096, 00:22:54.169 "send_buf_size": 4096, 00:22:54.169 "enable_recv_pipe": true, 00:22:54.169 "enable_quickack": false, 00:22:54.169 "enable_placement_id": 0, 00:22:54.169 "enable_zerocopy_send_server": true, 00:22:54.169 "enable_zerocopy_send_client": false, 00:22:54.169 "zerocopy_threshold": 0, 00:22:54.169 "tls_version": 0, 00:22:54.169 "enable_ktls": false 00:22:54.169 } 00:22:54.169 }, 00:22:54.169 { 00:22:54.169 "method": "sock_impl_set_options", 00:22:54.169 "params": { 00:22:54.169 "impl_name": "posix", 00:22:54.169 "recv_buf_size": 2097152, 00:22:54.169 "send_buf_size": 2097152, 00:22:54.169 "enable_recv_pipe": true, 00:22:54.169 "enable_quickack": false, 00:22:54.169 "enable_placement_id": 0, 
00:22:54.169 "enable_zerocopy_send_server": true, 00:22:54.169 "enable_zerocopy_send_client": false, 00:22:54.169 "zerocopy_threshold": 0, 00:22:54.169 "tls_version": 0, 00:22:54.169 "enable_ktls": false 00:22:54.169 } 00:22:54.169 } 00:22:54.169 ] 00:22:54.169 }, 00:22:54.169 { 00:22:54.169 "subsystem": "vmd", 00:22:54.169 "config": [] 00:22:54.169 }, 00:22:54.169 { 00:22:54.169 "subsystem": "accel", 00:22:54.169 "config": [ 00:22:54.169 { 00:22:54.169 "method": "accel_set_options", 00:22:54.169 "params": { 00:22:54.169 "small_cache_size": 128, 00:22:54.169 "large_cache_size": 16, 00:22:54.169 "task_count": 2048, 00:22:54.169 "sequence_count": 2048, 00:22:54.169 "buf_count": 2048 00:22:54.169 } 00:22:54.169 } 00:22:54.169 ] 00:22:54.169 }, 00:22:54.169 { 00:22:54.169 "subsystem": "bdev", 00:22:54.169 "config": [ 00:22:54.169 { 00:22:54.169 "method": "bdev_set_options", 00:22:54.169 "params": { 00:22:54.169 "bdev_io_pool_size": 65535, 00:22:54.169 "bdev_io_cache_size": 256, 00:22:54.169 "bdev_auto_examine": true, 00:22:54.169 "iobuf_small_cache_size": 128, 00:22:54.169 "iobuf_large_cache_size": 16 00:22:54.169 } 00:22:54.169 }, 00:22:54.169 { 00:22:54.169 "method": "bdev_raid_set_options", 00:22:54.169 "params": { 00:22:54.169 "process_window_size_kb": 1024, 00:22:54.169 "process_max_bandwidth_mb_sec": 0 00:22:54.169 } 00:22:54.169 }, 00:22:54.169 { 00:22:54.169 "method": "bdev_iscsi_set_options", 00:22:54.169 "params": { 00:22:54.169 "timeout_sec": 30 00:22:54.169 } 00:22:54.169 }, 00:22:54.169 { 00:22:54.169 "method": "bdev_nvme_set_options", 00:22:54.169 "params": { 00:22:54.169 "action_on_timeout": "none", 00:22:54.169 "timeout_us": 0, 00:22:54.169 "timeout_admin_us": 0, 00:22:54.169 "keep_alive_timeout_ms": 10000, 00:22:54.169 "arbitration_burst": 0, 00:22:54.169 "low_priority_weight": 0, 00:22:54.169 "medium_priority_weight": 0, 00:22:54.169 "high_priority_weight": 0, 00:22:54.169 "nvme_adminq_poll_period_us": 10000, 00:22:54.169 "nvme_ioq_poll_period_us": 0, 
00:22:54.169 "io_queue_requests": 0, 00:22:54.169 "delay_cmd_submit": true, 00:22:54.169 "transport_retry_count": 4, 00:22:54.169 "bdev_retry_count": 3, 00:22:54.169 "transport_ack_timeout": 0, 00:22:54.169 "ctrlr_loss_timeout_sec": 0, 00:22:54.169 "reconnect_delay_sec": 0, 00:22:54.169 "fast_io_fail_timeout_sec": 0, 00:22:54.169 "disable_auto_failback": false, 00:22:54.169 "generate_uuids": false, 00:22:54.170 "transport_tos": 0, 00:22:54.170 "nvme_error_stat": false, 00:22:54.170 "rdma_srq_size": 0, 00:22:54.170 "io_path_stat": false, 00:22:54.170 "allow_accel_sequence": false, 00:22:54.170 "rdma_max_cq_size": 0, 00:22:54.170 "rdma_cm_event_timeout_ms": 0, 00:22:54.170 "dhchap_digests": [ 00:22:54.170 "sha256", 00:22:54.170 "sha384", 00:22:54.170 "sha512" 00:22:54.170 ], 00:22:54.170 "dhchap_dhgroups": [ 00:22:54.170 "null", 00:22:54.170 "ffdhe2048", 00:22:54.170 "ffdhe3072", 00:22:54.170 "ffdhe4096", 00:22:54.170 "ffdhe6144", 00:22:54.170 "ffdhe8192" 00:22:54.170 ], 00:22:54.170 "rdma_umr_per_io": false 00:22:54.170 } 00:22:54.170 }, 00:22:54.170 { 00:22:54.170 "method": "bdev_nvme_set_hotplug", 00:22:54.170 "params": { 00:22:54.170 "period_us": 100000, 00:22:54.170 "enable": false 00:22:54.170 } 00:22:54.170 }, 00:22:54.170 { 00:22:54.170 "method": "bdev_malloc_create", 00:22:54.170 "params": { 00:22:54.170 "name": "malloc0", 00:22:54.170 "num_blocks": 8192, 00:22:54.170 "block_size": 4096, 00:22:54.170 "physical_block_size": 4096, 00:22:54.170 "uuid": "e4bb649e-42e9-4986-9444-b0ebd380a4f6", 00:22:54.170 "optimal_io_boundary": 0, 00:22:54.170 "md_size": 0, 00:22:54.170 "dif_type": 0, 00:22:54.170 "dif_is_head_of_md": false, 00:22:54.170 "dif_pi_format": 0 00:22:54.170 } 00:22:54.170 }, 00:22:54.170 { 00:22:54.170 "method": "bdev_wait_for_examine" 00:22:54.170 } 00:22:54.170 ] 00:22:54.170 }, 00:22:54.170 { 00:22:54.170 "subsystem": "nbd", 00:22:54.170 "config": [] 00:22:54.170 }, 00:22:54.170 { 00:22:54.170 "subsystem": "scheduler", 00:22:54.170 "config": [ 
00:22:54.170 { 00:22:54.170 "method": "framework_set_scheduler", 00:22:54.170 "params": { 00:22:54.170 "name": "static" 00:22:54.170 } 00:22:54.170 } 00:22:54.170 ] 00:22:54.170 }, 00:22:54.170 { 00:22:54.170 "subsystem": "nvmf", 00:22:54.170 "config": [ 00:22:54.170 { 00:22:54.170 "method": "nvmf_set_config", 00:22:54.170 "params": { 00:22:54.170 "discovery_filter": "match_any", 00:22:54.170 "admin_cmd_passthru": { 00:22:54.170 "identify_ctrlr": false 00:22:54.170 }, 00:22:54.170 "dhchap_digests": [ 00:22:54.170 "sha256", 00:22:54.170 "sha384", 00:22:54.170 "sha512" 00:22:54.170 ], 00:22:54.170 "dhchap_dhgroups": [ 00:22:54.170 "null", 00:22:54.170 "ffdhe2048", 00:22:54.170 "ffdhe3072", 00:22:54.170 "ffdhe4096", 00:22:54.170 "ffdhe6144", 00:22:54.170 "ffdhe8192" 00:22:54.170 ] 00:22:54.170 } 00:22:54.170 }, 00:22:54.170 { 00:22:54.170 "method": "nvmf_set_max_subsystems", 00:22:54.170 "params": { 00:22:54.170 "max_subsystems": 1024 00:22:54.170 } 00:22:54.170 }, 00:22:54.170 { 00:22:54.170 "method": "nvmf_set_crdt", 00:22:54.170 "params": { 00:22:54.170 "crdt1": 0, 00:22:54.170 "crdt2": 0, 00:22:54.170 "crdt3": 0 00:22:54.170 } 00:22:54.170 }, 00:22:54.170 { 00:22:54.170 "method": "nvmf_create_transport", 00:22:54.170 "params": { 00:22:54.170 "trtype": "TCP", 00:22:54.170 "max_queue_depth": 128, 00:22:54.170 "max_io_qpairs_per_ctrlr": 127, 00:22:54.170 "in_capsule_data_size": 4096, 00:22:54.170 "max_io_size": 131072, 00:22:54.170 "io_unit_size": 131072, 00:22:54.170 "max_aq_depth": 128, 00:22:54.170 "num_shared_buffers": 511, 00:22:54.170 "buf_cache_size": 4294967295, 00:22:54.170 "dif_insert_or_strip": false, 00:22:54.170 "zcopy": false, 00:22:54.170 "c2h_success": false, 00:22:54.170 "sock_priority": 0, 00:22:54.170 "abort_timeout_sec": 1, 00:22:54.170 "ack_timeout": 0, 00:22:54.170 "data_wr_pool_size": 0 00:22:54.170 } 00:22:54.170 }, 00:22:54.170 { 00:22:54.170 "method": "nvmf_create_subsystem", 00:22:54.170 "params": { 00:22:54.170 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:54.170 "allow_any_host": false, 00:22:54.170 "serial_number": "SPDK00000000000001", 00:22:54.170 "model_number": "SPDK bdev Controller", 00:22:54.170 "max_namespaces": 10, 00:22:54.170 "min_cntlid": 1, 00:22:54.170 "max_cntlid": 65519, 00:22:54.170 "ana_reporting": false 00:22:54.170 } 00:22:54.170 }, 00:22:54.170 { 00:22:54.170 "method": "nvmf_subsystem_add_host", 00:22:54.170 "params": { 00:22:54.170 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.170 "host": "nqn.2016-06.io.spdk:host1", 00:22:54.170 "psk": "key0" 00:22:54.170 } 00:22:54.170 }, 00:22:54.170 { 00:22:54.170 "method": "nvmf_subsystem_add_ns", 00:22:54.170 "params": { 00:22:54.170 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.170 "namespace": { 00:22:54.170 "nsid": 1, 00:22:54.170 "bdev_name": "malloc0", 00:22:54.170 "nguid": "E4BB649E42E949869444B0EBD380A4F6", 00:22:54.170 "uuid": "e4bb649e-42e9-4986-9444-b0ebd380a4f6", 00:22:54.170 "no_auto_visible": false 00:22:54.170 } 00:22:54.170 } 00:22:54.170 }, 00:22:54.170 { 00:22:54.170 "method": "nvmf_subsystem_add_listener", 00:22:54.170 "params": { 00:22:54.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.171 "listen_address": { 00:22:54.171 "trtype": "TCP", 00:22:54.171 "adrfam": "IPv4", 00:22:54.171 "traddr": "10.0.0.2", 00:22:54.171 "trsvcid": "4420" 00:22:54.171 }, 00:22:54.171 "secure_channel": true 00:22:54.171 } 00:22:54.171 } 00:22:54.171 ] 00:22:54.171 } 00:22:54.171 ] 00:22:54.171 }' 00:22:54.171 22:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:54.436 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:22:54.436 "subsystems": [ 00:22:54.436 { 00:22:54.436 "subsystem": "keyring", 00:22:54.436 "config": [ 00:22:54.436 { 00:22:54.436 "method": "keyring_file_add_key", 00:22:54.436 "params": { 00:22:54.436 "name": "key0", 00:22:54.436 "path": 
"/tmp/tmp.rztsbDaX4n" 00:22:54.436 } 00:22:54.436 } 00:22:54.436 ] 00:22:54.436 }, 00:22:54.436 { 00:22:54.436 "subsystem": "iobuf", 00:22:54.436 "config": [ 00:22:54.436 { 00:22:54.436 "method": "iobuf_set_options", 00:22:54.436 "params": { 00:22:54.436 "small_pool_count": 8192, 00:22:54.436 "large_pool_count": 1024, 00:22:54.436 "small_bufsize": 8192, 00:22:54.436 "large_bufsize": 135168, 00:22:54.436 "enable_numa": false 00:22:54.436 } 00:22:54.436 } 00:22:54.436 ] 00:22:54.436 }, 00:22:54.436 { 00:22:54.436 "subsystem": "sock", 00:22:54.436 "config": [ 00:22:54.436 { 00:22:54.436 "method": "sock_set_default_impl", 00:22:54.436 "params": { 00:22:54.436 "impl_name": "posix" 00:22:54.436 } 00:22:54.436 }, 00:22:54.436 { 00:22:54.436 "method": "sock_impl_set_options", 00:22:54.436 "params": { 00:22:54.436 "impl_name": "ssl", 00:22:54.436 "recv_buf_size": 4096, 00:22:54.436 "send_buf_size": 4096, 00:22:54.436 "enable_recv_pipe": true, 00:22:54.436 "enable_quickack": false, 00:22:54.436 "enable_placement_id": 0, 00:22:54.436 "enable_zerocopy_send_server": true, 00:22:54.436 "enable_zerocopy_send_client": false, 00:22:54.436 "zerocopy_threshold": 0, 00:22:54.436 "tls_version": 0, 00:22:54.436 "enable_ktls": false 00:22:54.436 } 00:22:54.436 }, 00:22:54.436 { 00:22:54.436 "method": "sock_impl_set_options", 00:22:54.436 "params": { 00:22:54.436 "impl_name": "posix", 00:22:54.437 "recv_buf_size": 2097152, 00:22:54.437 "send_buf_size": 2097152, 00:22:54.437 "enable_recv_pipe": true, 00:22:54.437 "enable_quickack": false, 00:22:54.437 "enable_placement_id": 0, 00:22:54.437 "enable_zerocopy_send_server": true, 00:22:54.437 "enable_zerocopy_send_client": false, 00:22:54.437 "zerocopy_threshold": 0, 00:22:54.437 "tls_version": 0, 00:22:54.437 "enable_ktls": false 00:22:54.437 } 00:22:54.437 } 00:22:54.437 ] 00:22:54.437 }, 00:22:54.437 { 00:22:54.437 "subsystem": "vmd", 00:22:54.437 "config": [] 00:22:54.437 }, 00:22:54.437 { 00:22:54.437 "subsystem": "accel", 00:22:54.437 
"config": [ 00:22:54.437 { 00:22:54.437 "method": "accel_set_options", 00:22:54.437 "params": { 00:22:54.437 "small_cache_size": 128, 00:22:54.437 "large_cache_size": 16, 00:22:54.437 "task_count": 2048, 00:22:54.437 "sequence_count": 2048, 00:22:54.437 "buf_count": 2048 00:22:54.437 } 00:22:54.437 } 00:22:54.437 ] 00:22:54.437 }, 00:22:54.437 { 00:22:54.437 "subsystem": "bdev", 00:22:54.437 "config": [ 00:22:54.437 { 00:22:54.437 "method": "bdev_set_options", 00:22:54.437 "params": { 00:22:54.437 "bdev_io_pool_size": 65535, 00:22:54.437 "bdev_io_cache_size": 256, 00:22:54.437 "bdev_auto_examine": true, 00:22:54.437 "iobuf_small_cache_size": 128, 00:22:54.437 "iobuf_large_cache_size": 16 00:22:54.437 } 00:22:54.437 }, 00:22:54.437 { 00:22:54.437 "method": "bdev_raid_set_options", 00:22:54.437 "params": { 00:22:54.437 "process_window_size_kb": 1024, 00:22:54.437 "process_max_bandwidth_mb_sec": 0 00:22:54.437 } 00:22:54.437 }, 00:22:54.437 { 00:22:54.437 "method": "bdev_iscsi_set_options", 00:22:54.437 "params": { 00:22:54.437 "timeout_sec": 30 00:22:54.437 } 00:22:54.437 }, 00:22:54.437 { 00:22:54.437 "method": "bdev_nvme_set_options", 00:22:54.437 "params": { 00:22:54.437 "action_on_timeout": "none", 00:22:54.437 "timeout_us": 0, 00:22:54.437 "timeout_admin_us": 0, 00:22:54.437 "keep_alive_timeout_ms": 10000, 00:22:54.437 "arbitration_burst": 0, 00:22:54.437 "low_priority_weight": 0, 00:22:54.437 "medium_priority_weight": 0, 00:22:54.437 "high_priority_weight": 0, 00:22:54.437 "nvme_adminq_poll_period_us": 10000, 00:22:54.437 "nvme_ioq_poll_period_us": 0, 00:22:54.437 "io_queue_requests": 512, 00:22:54.437 "delay_cmd_submit": true, 00:22:54.437 "transport_retry_count": 4, 00:22:54.437 "bdev_retry_count": 3, 00:22:54.437 "transport_ack_timeout": 0, 00:22:54.437 "ctrlr_loss_timeout_sec": 0, 00:22:54.437 "reconnect_delay_sec": 0, 00:22:54.437 "fast_io_fail_timeout_sec": 0, 00:22:54.437 "disable_auto_failback": false, 00:22:54.437 "generate_uuids": false, 00:22:54.437 
"transport_tos": 0, 00:22:54.437 "nvme_error_stat": false, 00:22:54.437 "rdma_srq_size": 0, 00:22:54.437 "io_path_stat": false, 00:22:54.437 "allow_accel_sequence": false, 00:22:54.437 "rdma_max_cq_size": 0, 00:22:54.437 "rdma_cm_event_timeout_ms": 0, 00:22:54.437 "dhchap_digests": [ 00:22:54.437 "sha256", 00:22:54.437 "sha384", 00:22:54.437 "sha512" 00:22:54.437 ], 00:22:54.437 "dhchap_dhgroups": [ 00:22:54.437 "null", 00:22:54.437 "ffdhe2048", 00:22:54.437 "ffdhe3072", 00:22:54.437 "ffdhe4096", 00:22:54.437 "ffdhe6144", 00:22:54.437 "ffdhe8192" 00:22:54.437 ], 00:22:54.438 "rdma_umr_per_io": false 00:22:54.438 } 00:22:54.438 }, 00:22:54.438 { 00:22:54.438 "method": "bdev_nvme_attach_controller", 00:22:54.438 "params": { 00:22:54.438 "name": "TLSTEST", 00:22:54.438 "trtype": "TCP", 00:22:54.438 "adrfam": "IPv4", 00:22:54.438 "traddr": "10.0.0.2", 00:22:54.438 "trsvcid": "4420", 00:22:54.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.438 "prchk_reftag": false, 00:22:54.438 "prchk_guard": false, 00:22:54.438 "ctrlr_loss_timeout_sec": 0, 00:22:54.438 "reconnect_delay_sec": 0, 00:22:54.438 "fast_io_fail_timeout_sec": 0, 00:22:54.438 "psk": "key0", 00:22:54.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.438 "hdgst": false, 00:22:54.438 "ddgst": false, 00:22:54.438 "multipath": "multipath" 00:22:54.438 } 00:22:54.438 }, 00:22:54.438 { 00:22:54.438 "method": "bdev_nvme_set_hotplug", 00:22:54.438 "params": { 00:22:54.438 "period_us": 100000, 00:22:54.438 "enable": false 00:22:54.438 } 00:22:54.438 }, 00:22:54.438 { 00:22:54.438 "method": "bdev_wait_for_examine" 00:22:54.438 } 00:22:54.438 ] 00:22:54.438 }, 00:22:54.438 { 00:22:54.438 "subsystem": "nbd", 00:22:54.438 "config": [] 00:22:54.438 } 00:22:54.438 ] 00:22:54.438 }' 00:22:54.438 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 354368 00:22:54.438 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 354368 ']' 00:22:54.438 22:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354368 00:22:54.438 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:54.438 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.438 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354368 00:22:54.438 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:54.438 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:54.438 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354368' 00:22:54.438 killing process with pid 354368 00:22:54.438 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354368 00:22:54.438 Received shutdown signal, test time was about 10.000000 seconds 00:22:54.438 00:22:54.438 Latency(us) 00:22:54.438 [2024-12-14T21:32:15.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.438 [2024-12-14T21:32:15.322Z] =================================================================================================================== 00:22:54.438 [2024-12-14T21:32:15.322Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:54.438 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354368 00:22:54.699 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 354121 00:22:54.699 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 354121 ']' 00:22:54.699 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354121 00:22:54.699 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:54.699 
22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.699 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354121 00:22:54.699 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:54.699 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:54.699 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354121' 00:22:54.699 killing process with pid 354121 00:22:54.699 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354121 00:22:54.699 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354121 00:22:54.699 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:54.699 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:54.699 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:54.699 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:22:54.699 "subsystems": [ 00:22:54.699 { 00:22:54.699 "subsystem": "keyring", 00:22:54.699 "config": [ 00:22:54.699 { 00:22:54.699 "method": "keyring_file_add_key", 00:22:54.699 "params": { 00:22:54.699 "name": "key0", 00:22:54.699 "path": "/tmp/tmp.rztsbDaX4n" 00:22:54.699 } 00:22:54.699 } 00:22:54.699 ] 00:22:54.700 }, 00:22:54.700 { 00:22:54.700 "subsystem": "iobuf", 00:22:54.700 "config": [ 00:22:54.700 { 00:22:54.700 "method": "iobuf_set_options", 00:22:54.700 "params": { 00:22:54.700 "small_pool_count": 8192, 00:22:54.700 "large_pool_count": 1024, 00:22:54.700 "small_bufsize": 8192, 00:22:54.700 "large_bufsize": 135168, 00:22:54.700 "enable_numa": false 00:22:54.700 } 
00:22:54.700 } 00:22:54.700 ] 00:22:54.700 }, 00:22:54.700 { 00:22:54.700 "subsystem": "sock", 00:22:54.700 "config": [ 00:22:54.700 { 00:22:54.700 "method": "sock_set_default_impl", 00:22:54.700 "params": { 00:22:54.700 "impl_name": "posix" 00:22:54.700 } 00:22:54.700 }, 00:22:54.700 { 00:22:54.700 "method": "sock_impl_set_options", 00:22:54.700 "params": { 00:22:54.700 "impl_name": "ssl", 00:22:54.700 "recv_buf_size": 4096, 00:22:54.700 "send_buf_size": 4096, 00:22:54.700 "enable_recv_pipe": true, 00:22:54.700 "enable_quickack": false, 00:22:54.700 "enable_placement_id": 0, 00:22:54.700 "enable_zerocopy_send_server": true, 00:22:54.700 "enable_zerocopy_send_client": false, 00:22:54.700 "zerocopy_threshold": 0, 00:22:54.700 "tls_version": 0, 00:22:54.700 "enable_ktls": false 00:22:54.700 } 00:22:54.700 }, 00:22:54.700 { 00:22:54.700 "method": "sock_impl_set_options", 00:22:54.700 "params": { 00:22:54.700 "impl_name": "posix", 00:22:54.700 "recv_buf_size": 2097152, 00:22:54.700 "send_buf_size": 2097152, 00:22:54.700 "enable_recv_pipe": true, 00:22:54.700 "enable_quickack": false, 00:22:54.700 "enable_placement_id": 0, 00:22:54.700 "enable_zerocopy_send_server": true, 00:22:54.700 "enable_zerocopy_send_client": false, 00:22:54.700 "zerocopy_threshold": 0, 00:22:54.700 "tls_version": 0, 00:22:54.700 "enable_ktls": false 00:22:54.700 } 00:22:54.700 } 00:22:54.700 ] 00:22:54.700 }, 00:22:54.700 { 00:22:54.700 "subsystem": "vmd", 00:22:54.700 "config": [] 00:22:54.700 }, 00:22:54.700 { 00:22:54.700 "subsystem": "accel", 00:22:54.700 "config": [ 00:22:54.700 { 00:22:54.700 "method": "accel_set_options", 00:22:54.700 "params": { 00:22:54.700 "small_cache_size": 128, 00:22:54.700 "large_cache_size": 16, 00:22:54.700 "task_count": 2048, 00:22:54.700 "sequence_count": 2048, 00:22:54.700 "buf_count": 2048 00:22:54.700 } 00:22:54.700 } 00:22:54.700 ] 00:22:54.700 }, 00:22:54.700 { 00:22:54.700 "subsystem": "bdev", 00:22:54.700 "config": [ 00:22:54.700 { 00:22:54.700 "method": 
"bdev_set_options", 00:22:54.700 "params": { 00:22:54.700 "bdev_io_pool_size": 65535, 00:22:54.700 "bdev_io_cache_size": 256, 00:22:54.700 "bdev_auto_examine": true, 00:22:54.700 "iobuf_small_cache_size": 128, 00:22:54.700 "iobuf_large_cache_size": 16 00:22:54.700 } 00:22:54.700 }, 00:22:54.700 { 00:22:54.700 "method": "bdev_raid_set_options", 00:22:54.700 "params": { 00:22:54.700 "process_window_size_kb": 1024, 00:22:54.700 "process_max_bandwidth_mb_sec": 0 00:22:54.700 } 00:22:54.700 }, 00:22:54.700 { 00:22:54.700 "method": "bdev_iscsi_set_options", 00:22:54.700 "params": { 00:22:54.700 "timeout_sec": 30 00:22:54.700 } 00:22:54.700 }, 00:22:54.700 { 00:22:54.700 "method": "bdev_nvme_set_options", 00:22:54.700 "params": { 00:22:54.700 "action_on_timeout": "none", 00:22:54.700 "timeout_us": 0, 00:22:54.700 "timeout_admin_us": 0, 00:22:54.700 "keep_alive_timeout_ms": 10000, 00:22:54.700 "arbitration_burst": 0, 00:22:54.700 "low_priority_weight": 0, 00:22:54.700 "medium_priority_weight": 0, 00:22:54.700 "high_priority_weight": 0, 00:22:54.700 "nvme_adminq_poll_period_us": 10000, 00:22:54.700 "nvme_ioq_poll_period_us": 0, 00:22:54.700 "io_queue_requests": 0, 00:22:54.700 "delay_cmd_submit": true, 00:22:54.700 "transport_retry_count": 4, 00:22:54.700 "bdev_retry_count": 3, 00:22:54.700 "transport_ack_timeout": 0, 00:22:54.700 "ctrlr_loss_timeout_sec": 0, 00:22:54.700 "reconnect_delay_sec": 0, 00:22:54.700 "fast_io_fail_timeout_sec": 0, 00:22:54.700 "disable_auto_failback": false, 00:22:54.700 "generate_uuids": false, 00:22:54.700 "transport_tos": 0, 00:22:54.700 "nvme_error_stat": false, 00:22:54.700 "rdma_srq_size": 0, 00:22:54.700 "io_path_stat": false, 00:22:54.700 "allow_accel_sequence": false, 00:22:54.700 "rdma_max_cq_size": 0, 00:22:54.700 "rdma_cm_event_timeout_ms": 0, 00:22:54.700 "dhchap_digests": [ 00:22:54.700 "sha256", 00:22:54.700 "sha384", 00:22:54.700 "sha512" 00:22:54.700 ], 00:22:54.700 "dhchap_dhgroups": [ 00:22:54.700 "null", 00:22:54.700 
"ffdhe2048", 00:22:54.700 "ffdhe3072", 00:22:54.700 "ffdhe4096", 00:22:54.700 "ffdhe6144", 00:22:54.700 "ffdhe8192" 00:22:54.700 ], 00:22:54.700 "rdma_umr_per_io": false 00:22:54.700 } 00:22:54.700 }, 00:22:54.700 { 00:22:54.700 "method": "bdev_nvme_set_hotplug", 00:22:54.700 "params": { 00:22:54.700 "period_us": 100000, 00:22:54.700 "enable": false 00:22:54.700 } 00:22:54.700 }, 00:22:54.700 { 00:22:54.700 "method": "bdev_malloc_create", 00:22:54.700 "params": { 00:22:54.700 "name": "malloc0", 00:22:54.700 "num_blocks": 8192, 00:22:54.700 "block_size": 4096, 00:22:54.700 "physical_block_size": 4096, 00:22:54.700 "uuid": "e4bb649e-42e9-4986-9444-b0ebd380a4f6", 00:22:54.700 "optimal_io_boundary": 0, 00:22:54.700 "md_size": 0, 00:22:54.700 "dif_type": 0, 00:22:54.701 "dif_is_head_of_md": false, 00:22:54.701 "dif_pi_format": 0 00:22:54.701 } 00:22:54.701 }, 00:22:54.701 { 00:22:54.701 "method": "bdev_wait_for_examine" 00:22:54.701 } 00:22:54.701 ] 00:22:54.701 }, 00:22:54.701 { 00:22:54.701 "subsystem": "nbd", 00:22:54.701 "config": [] 00:22:54.701 }, 00:22:54.701 { 00:22:54.701 "subsystem": "scheduler", 00:22:54.701 "config": [ 00:22:54.701 { 00:22:54.701 "method": "framework_set_scheduler", 00:22:54.701 "params": { 00:22:54.701 "name": "static" 00:22:54.701 } 00:22:54.701 } 00:22:54.701 ] 00:22:54.701 }, 00:22:54.701 { 00:22:54.701 "subsystem": "nvmf", 00:22:54.701 "config": [ 00:22:54.701 { 00:22:54.701 "method": "nvmf_set_config", 00:22:54.701 "params": { 00:22:54.701 "discovery_filter": "match_any", 00:22:54.701 "admin_cmd_passthru": { 00:22:54.701 "identify_ctrlr": false 00:22:54.701 }, 00:22:54.701 "dhchap_digests": [ 00:22:54.701 "sha256", 00:22:54.701 "sha384", 00:22:54.701 "sha512" 00:22:54.701 ], 00:22:54.701 "dhchap_dhgroups": [ 00:22:54.701 "null", 00:22:54.701 "ffdhe2048", 00:22:54.701 "ffdhe3072", 00:22:54.701 "ffdhe4096", 00:22:54.701 "ffdhe6144", 00:22:54.701 "ffdhe8192" 00:22:54.701 ] 00:22:54.701 } 00:22:54.701 }, 00:22:54.701 { 00:22:54.701 
"method": "nvmf_set_max_subsystems", 00:22:54.701 "params": { 00:22:54.701 "max_subsystems": 1024 00:22:54.701 } 00:22:54.701 }, 00:22:54.701 { 00:22:54.701 "method": "nvmf_set_crdt", 00:22:54.701 "params": { 00:22:54.701 "crdt1": 0, 00:22:54.701 "crdt2": 0, 00:22:54.701 "crdt3": 0 00:22:54.701 } 00:22:54.701 }, 00:22:54.701 { 00:22:54.701 "method": "nvmf_create_transport", 00:22:54.701 "params": { 00:22:54.701 "trtype": "TCP", 00:22:54.701 "max_queue_depth": 128, 00:22:54.701 "max_io_qpairs_per_ctrlr": 127, 00:22:54.701 "in_capsule_data_size": 4096, 00:22:54.701 "max_io_size": 131072, 00:22:54.701 "io_unit_size": 131072, 00:22:54.701 "max_aq_depth": 128, 00:22:54.701 "num_shared_buffers": 511, 00:22:54.701 "buf_cache_size": 4294967295, 00:22:54.701 "dif_insert_or_strip": false, 00:22:54.701 "zcopy": false, 00:22:54.701 "c2h_success": false, 00:22:54.701 "sock_priority": 0, 00:22:54.701 "abort_timeout_sec": 1, 00:22:54.701 "ack_timeout": 0, 00:22:54.701 "data_wr_pool_size": 0 00:22:54.701 } 00:22:54.701 }, 00:22:54.701 { 00:22:54.701 "method": "nvmf_create_subsystem", 00:22:54.701 "params": { 00:22:54.701 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.701 "allow_any_host": false, 00:22:54.701 "serial_number": "SPDK00000000000001", 00:22:54.701 "model_number": "SPDK bdev Controller", 00:22:54.701 "max_namespaces": 10, 00:22:54.701 "min_cntlid": 1, 00:22:54.701 "max_cntlid": 65519, 00:22:54.701 "ana_reporting": false 00:22:54.701 } 00:22:54.701 }, 00:22:54.701 { 00:22:54.701 "method": "nvmf_subsystem_add_host", 00:22:54.701 "params": { 00:22:54.701 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.701 "host": "nqn.2016-06.io.spdk:host1", 00:22:54.701 "psk": "key0" 00:22:54.701 } 00:22:54.701 }, 00:22:54.701 { 00:22:54.701 "method": "nvmf_subsystem_add_ns", 00:22:54.701 "params": { 00:22:54.701 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.701 "namespace": { 00:22:54.701 "nsid": 1, 00:22:54.701 "bdev_name": "malloc0", 00:22:54.701 "nguid": 
"E4BB649E42E949869444B0EBD380A4F6", 00:22:54.701 "uuid": "e4bb649e-42e9-4986-9444-b0ebd380a4f6", 00:22:54.701 "no_auto_visible": false 00:22:54.701 } 00:22:54.701 } 00:22:54.701 }, 00:22:54.701 { 00:22:54.701 "method": "nvmf_subsystem_add_listener", 00:22:54.701 "params": { 00:22:54.701 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.701 "listen_address": { 00:22:54.701 "trtype": "TCP", 00:22:54.701 "adrfam": "IPv4", 00:22:54.701 "traddr": "10.0.0.2", 00:22:54.701 "trsvcid": "4420" 00:22:54.701 }, 00:22:54.701 "secure_channel": true 00:22:54.701 } 00:22:54.701 } 00:22:54.701 ] 00:22:54.701 } 00:22:54.701 ] 00:22:54.701 }' 00:22:54.701 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.961 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=354719 00:22:54.961 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:54.961 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 354719 00:22:54.961 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 354719 ']' 00:22:54.961 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.961 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.961 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:54.961 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.961 22:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.961 [2024-12-14 22:32:15.626215] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:54.961 [2024-12-14 22:32:15.626261] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.961 [2024-12-14 22:32:15.700292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.961 [2024-12-14 22:32:15.720278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.961 [2024-12-14 22:32:15.720313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.961 [2024-12-14 22:32:15.720320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.961 [2024-12-14 22:32:15.720325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.961 [2024-12-14 22:32:15.720330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:54.961 [2024-12-14 22:32:15.720852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.221 [2024-12-14 22:32:15.928684] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.221 [2024-12-14 22:32:15.960712] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:55.221 [2024-12-14 22:32:15.960928] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.790 22:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.790 22:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:55.790 22:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:55.790 22:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:55.790 22:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.790 22:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.790 22:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=354852 00:22:55.790 22:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 354852 /var/tmp/bdevperf.sock 00:22:55.790 22:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 354852 ']' 00:22:55.790 22:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.790 22:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:55.790 22:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:22:55.790 22:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.790 22:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:22:55.790 "subsystems": [ 00:22:55.790 { 00:22:55.790 "subsystem": "keyring", 00:22:55.790 "config": [ 00:22:55.790 { 00:22:55.790 "method": "keyring_file_add_key", 00:22:55.790 "params": { 00:22:55.790 "name": "key0", 00:22:55.790 "path": "/tmp/tmp.rztsbDaX4n" 00:22:55.790 } 00:22:55.790 } 00:22:55.790 ] 00:22:55.790 }, 00:22:55.790 { 00:22:55.790 "subsystem": "iobuf", 00:22:55.790 "config": [ 00:22:55.790 { 00:22:55.790 "method": "iobuf_set_options", 00:22:55.790 "params": { 00:22:55.790 "small_pool_count": 8192, 00:22:55.790 "large_pool_count": 1024, 00:22:55.790 "small_bufsize": 8192, 00:22:55.790 "large_bufsize": 135168, 00:22:55.790 "enable_numa": false 00:22:55.790 } 00:22:55.790 } 00:22:55.790 ] 00:22:55.790 }, 00:22:55.790 { 00:22:55.790 "subsystem": "sock", 00:22:55.790 "config": [ 00:22:55.790 { 00:22:55.790 "method": "sock_set_default_impl", 00:22:55.790 "params": { 00:22:55.790 "impl_name": "posix" 00:22:55.790 } 00:22:55.790 }, 00:22:55.790 { 00:22:55.790 "method": "sock_impl_set_options", 00:22:55.790 "params": { 00:22:55.790 "impl_name": "ssl", 00:22:55.790 "recv_buf_size": 4096, 00:22:55.790 "send_buf_size": 4096, 00:22:55.790 "enable_recv_pipe": true, 00:22:55.790 "enable_quickack": false, 00:22:55.790 "enable_placement_id": 0, 00:22:55.790 "enable_zerocopy_send_server": true, 00:22:55.790 "enable_zerocopy_send_client": false, 00:22:55.790 "zerocopy_threshold": 0, 00:22:55.790 "tls_version": 0, 00:22:55.790 "enable_ktls": false 00:22:55.790 } 00:22:55.790 }, 00:22:55.790 { 00:22:55.790 "method": "sock_impl_set_options", 00:22:55.790 "params": { 
00:22:55.790 "impl_name": "posix", 00:22:55.790 "recv_buf_size": 2097152, 00:22:55.790 "send_buf_size": 2097152, 00:22:55.790 "enable_recv_pipe": true, 00:22:55.790 "enable_quickack": false, 00:22:55.790 "enable_placement_id": 0, 00:22:55.790 "enable_zerocopy_send_server": true, 00:22:55.790 "enable_zerocopy_send_client": false, 00:22:55.790 "zerocopy_threshold": 0, 00:22:55.790 "tls_version": 0, 00:22:55.790 "enable_ktls": false 00:22:55.790 } 00:22:55.790 } 00:22:55.790 ] 00:22:55.790 }, 00:22:55.790 { 00:22:55.790 "subsystem": "vmd", 00:22:55.790 "config": [] 00:22:55.790 }, 00:22:55.790 { 00:22:55.790 "subsystem": "accel", 00:22:55.790 "config": [ 00:22:55.790 { 00:22:55.790 "method": "accel_set_options", 00:22:55.790 "params": { 00:22:55.790 "small_cache_size": 128, 00:22:55.790 "large_cache_size": 16, 00:22:55.790 "task_count": 2048, 00:22:55.790 "sequence_count": 2048, 00:22:55.790 "buf_count": 2048 00:22:55.790 } 00:22:55.790 } 00:22:55.790 ] 00:22:55.790 }, 00:22:55.790 { 00:22:55.790 "subsystem": "bdev", 00:22:55.790 "config": [ 00:22:55.790 { 00:22:55.790 "method": "bdev_set_options", 00:22:55.790 "params": { 00:22:55.790 "bdev_io_pool_size": 65535, 00:22:55.790 "bdev_io_cache_size": 256, 00:22:55.790 "bdev_auto_examine": true, 00:22:55.790 "iobuf_small_cache_size": 128, 00:22:55.790 "iobuf_large_cache_size": 16 00:22:55.790 } 00:22:55.790 }, 00:22:55.790 { 00:22:55.790 "method": "bdev_raid_set_options", 00:22:55.790 "params": { 00:22:55.790 "process_window_size_kb": 1024, 00:22:55.790 "process_max_bandwidth_mb_sec": 0 00:22:55.790 } 00:22:55.790 }, 00:22:55.790 { 00:22:55.790 "method": "bdev_iscsi_set_options", 00:22:55.790 "params": { 00:22:55.790 "timeout_sec": 30 00:22:55.790 } 00:22:55.790 }, 00:22:55.790 { 00:22:55.790 "method": "bdev_nvme_set_options", 00:22:55.790 "params": { 00:22:55.790 "action_on_timeout": "none", 00:22:55.790 "timeout_us": 0, 00:22:55.790 "timeout_admin_us": 0, 00:22:55.790 "keep_alive_timeout_ms": 10000, 00:22:55.790 
"arbitration_burst": 0, 00:22:55.790 "low_priority_weight": 0, 00:22:55.790 "medium_priority_weight": 0, 00:22:55.790 "high_priority_weight": 0, 00:22:55.790 "nvme_adminq_poll_period_us": 10000, 00:22:55.790 "nvme_ioq_poll_period_us": 0, 00:22:55.790 "io_queue_requests": 512, 00:22:55.790 "delay_cmd_submit": true, 00:22:55.790 "transport_retry_count": 4, 00:22:55.790 "bdev_retry_count": 3, 00:22:55.790 "transport_ack_timeout": 0, 00:22:55.790 "ctrlr_loss_timeout_sec": 0, 00:22:55.790 "reconnect_delay_sec": 0, 00:22:55.790 "fast_io_fail_timeout_sec": 0, 00:22:55.790 "disable_auto_failback": false, 00:22:55.790 "generate_uuids": false, 00:22:55.791 "transport_tos": 0, 00:22:55.791 "nvme_error_stat": false, 00:22:55.791 "rdma_srq_size": 0, 00:22:55.791 "io_path_stat": false, 00:22:55.791 "allow_accel_sequence": false, 00:22:55.791 "rdma_max_cq_size": 0, 00:22:55.791 "rdma_cm_event_timeout_ms": 0, 00:22:55.791 "dhchap_digests": [ 00:22:55.791 "sha256", 00:22:55.791 "sha384", 00:22:55.791 "sha512" 00:22:55.791 ], 00:22:55.791 "dhchap_dhgroups": [ 00:22:55.791 "null", 00:22:55.791 "ffdhe2048", 00:22:55.791 "ffdhe3072", 00:22:55.791 "ffdhe4096", 00:22:55.791 "ffdhe6144", 00:22:55.791 "ffdhe8192" 00:22:55.791 ], 00:22:55.791 "rdma_umr_per_io": false 00:22:55.791 } 00:22:55.791 }, 00:22:55.791 { 00:22:55.791 "method": "bdev_nvme_attach_controller", 00:22:55.791 "params": { 00:22:55.791 "name": "TLSTEST", 00:22:55.791 "trtype": "TCP", 00:22:55.791 "adrfam": "IPv4", 00:22:55.791 "traddr": "10.0.0.2", 00:22:55.791 "trsvcid": "4420", 00:22:55.791 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.791 "prchk_reftag": false, 00:22:55.791 "prchk_guard": false, 00:22:55.791 "ctrlr_loss_timeout_sec": 0, 00:22:55.791 "reconnect_delay_sec": 0, 00:22:55.791 "fast_io_fail_timeout_sec": 0, 00:22:55.791 "psk": "key0", 00:22:55.791 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:55.791 "hdgst": false, 00:22:55.791 "ddgst": false, 00:22:55.791 "multipath": "multipath" 00:22:55.791 } 
00:22:55.791 }, 00:22:55.791 { 00:22:55.791 "method": "bdev_nvme_set_hotplug", 00:22:55.791 "params": { 00:22:55.791 "period_us": 100000, 00:22:55.791 "enable": false 00:22:55.791 } 00:22:55.791 }, 00:22:55.791 { 00:22:55.791 "method": "bdev_wait_for_examine" 00:22:55.791 } 00:22:55.791 ] 00:22:55.791 }, 00:22:55.791 { 00:22:55.791 "subsystem": "nbd", 00:22:55.791 "config": [] 00:22:55.791 } 00:22:55.791 ] 00:22:55.791 }' 00:22:55.791 22:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.791 22:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.791 [2024-12-14 22:32:16.551976] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:55.791 [2024-12-14 22:32:16.552025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid354852 ] 00:22:55.791 [2024-12-14 22:32:16.627418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.791 [2024-12-14 22:32:16.649599] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.050 [2024-12-14 22:32:16.797228] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:56.619 22:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.619 22:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:56.619 22:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:56.619 Running I/O for 10 seconds... 
00:22:58.935 5303.00 IOPS, 20.71 MiB/s [2024-12-14T21:32:20.756Z] 5307.00 IOPS, 20.73 MiB/s [2024-12-14T21:32:21.692Z] 5303.33 IOPS, 20.72 MiB/s [2024-12-14T21:32:22.628Z] 5330.25 IOPS, 20.82 MiB/s [2024-12-14T21:32:23.565Z] 5384.40 IOPS, 21.03 MiB/s [2024-12-14T21:32:24.501Z] 5460.00 IOPS, 21.33 MiB/s [2024-12-14T21:32:25.876Z] 5491.57 IOPS, 21.45 MiB/s [2024-12-14T21:32:26.811Z] 5519.25 IOPS, 21.56 MiB/s [2024-12-14T21:32:27.746Z] 5486.56 IOPS, 21.43 MiB/s [2024-12-14T21:32:27.746Z] 5463.40 IOPS, 21.34 MiB/s 00:23:06.862 Latency(us) 00:23:06.862 [2024-12-14T21:32:27.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.862 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:06.862 Verification LBA range: start 0x0 length 0x2000 00:23:06.862 TLSTESTn1 : 10.03 5458.67 21.32 0.00 0.00 23399.14 4993.22 37948.46 00:23:06.862 [2024-12-14T21:32:27.746Z] =================================================================================================================== 00:23:06.862 [2024-12-14T21:32:27.746Z] Total : 5458.67 21.32 0.00 0.00 23399.14 4993.22 37948.46 00:23:06.862 { 00:23:06.862 "results": [ 00:23:06.862 { 00:23:06.862 "job": "TLSTESTn1", 00:23:06.862 "core_mask": "0x4", 00:23:06.862 "workload": "verify", 00:23:06.862 "status": "finished", 00:23:06.862 "verify_range": { 00:23:06.862 "start": 0, 00:23:06.862 "length": 8192 00:23:06.862 }, 00:23:06.862 "queue_depth": 128, 00:23:06.862 "io_size": 4096, 00:23:06.862 "runtime": 10.032122, 00:23:06.862 "iops": 5458.665674121587, 00:23:06.862 "mibps": 21.322912789537448, 00:23:06.862 "io_failed": 0, 00:23:06.862 "io_timeout": 0, 00:23:06.862 "avg_latency_us": 23399.140751407387, 00:23:06.862 "min_latency_us": 4993.219047619048, 00:23:06.862 "max_latency_us": 37948.46476190476 00:23:06.862 } 00:23:06.862 ], 00:23:06.862 "core_count": 1 00:23:06.862 } 00:23:06.862 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:06.862 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 354852 00:23:06.862 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 354852 ']' 00:23:06.862 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354852 00:23:06.862 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:06.862 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.862 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354852 00:23:06.862 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:06.862 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:06.862 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354852' 00:23:06.862 killing process with pid 354852 00:23:06.862 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354852 00:23:06.862 Received shutdown signal, test time was about 10.000000 seconds 00:23:06.862 00:23:06.862 Latency(us) 00:23:06.862 [2024-12-14T21:32:27.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.863 [2024-12-14T21:32:27.747Z] =================================================================================================================== 00:23:06.863 [2024-12-14T21:32:27.747Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:06.863 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354852 00:23:07.121 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 354719 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # '[' -z 354719 ']' 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354719 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354719 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354719' 00:23:07.122 killing process with pid 354719 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354719 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354719 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=356643 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 356643 00:23:07.122 22:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 356643 ']' 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.122 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.380 [2024-12-14 22:32:28.032445] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:07.381 [2024-12-14 22:32:28.032495] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.381 [2024-12-14 22:32:28.109413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.381 [2024-12-14 22:32:28.130294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.381 [2024-12-14 22:32:28.130330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.381 [2024-12-14 22:32:28.130337] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.381 [2024-12-14 22:32:28.130342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:07.381 [2024-12-14 22:32:28.130347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.381 [2024-12-14 22:32:28.130866] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.381 22:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.381 22:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:07.381 22:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:07.381 22:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:07.381 22:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.639 22:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.639 22:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.rztsbDaX4n 00:23:07.639 22:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rztsbDaX4n 00:23:07.639 22:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:07.639 [2024-12-14 22:32:28.442128] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.639 22:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:07.897 22:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:08.155 [2024-12-14 22:32:28.811069] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:23:08.155 [2024-12-14 22:32:28.811270] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.155 22:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:08.155 malloc0 00:23:08.155 22:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:08.413 22:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rztsbDaX4n 00:23:08.671 22:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:08.930 22:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=357004 00:23:08.930 22:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:08.930 22:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.930 22:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 357004 /var/tmp/bdevperf.sock 00:23:08.930 22:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 357004 ']' 00:23:08.930 22:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.930 22:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.930 22:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.930 22:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.930 22:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.930 [2024-12-14 22:32:29.658538] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:08.930 [2024-12-14 22:32:29.658599] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357004 ] 00:23:08.930 [2024-12-14 22:32:29.730593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.930 [2024-12-14 22:32:29.752339] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.188 22:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.188 22:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:09.188 22:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rztsbDaX4n 00:23:09.188 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:09.446 [2024-12-14 22:32:30.211794] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered 
experimental 00:23:09.446 nvme0n1 00:23:09.446 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:09.704 Running I/O for 1 seconds... 00:23:10.639 4475.00 IOPS, 17.48 MiB/s 00:23:10.639 Latency(us) 00:23:10.639 [2024-12-14T21:32:31.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.639 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:10.639 Verification LBA range: start 0x0 length 0x2000 00:23:10.639 nvme0n1 : 1.02 4524.74 17.67 0.00 0.00 28039.60 5648.58 27837.20 00:23:10.639 [2024-12-14T21:32:31.523Z] =================================================================================================================== 00:23:10.639 [2024-12-14T21:32:31.523Z] Total : 4524.74 17.67 0.00 0.00 28039.60 5648.58 27837.20 00:23:10.639 { 00:23:10.639 "results": [ 00:23:10.639 { 00:23:10.639 "job": "nvme0n1", 00:23:10.639 "core_mask": "0x2", 00:23:10.639 "workload": "verify", 00:23:10.639 "status": "finished", 00:23:10.639 "verify_range": { 00:23:10.639 "start": 0, 00:23:10.639 "length": 8192 00:23:10.639 }, 00:23:10.639 "queue_depth": 128, 00:23:10.639 "io_size": 4096, 00:23:10.639 "runtime": 1.017297, 00:23:10.639 "iops": 4524.735647505104, 00:23:10.639 "mibps": 17.67474862306681, 00:23:10.639 "io_failed": 0, 00:23:10.639 "io_timeout": 0, 00:23:10.639 "avg_latency_us": 28039.598244623074, 00:23:10.639 "min_latency_us": 5648.579047619048, 00:23:10.639 "max_latency_us": 27837.196190476192 00:23:10.639 } 00:23:10.639 ], 00:23:10.639 "core_count": 1 00:23:10.639 } 00:23:10.640 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 357004 00:23:10.640 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357004 ']' 00:23:10.640 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 357004 00:23:10.640 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:10.640 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.640 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357004 00:23:10.640 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:10.640 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:10.640 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357004' 00:23:10.640 killing process with pid 357004 00:23:10.640 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357004 00:23:10.640 Received shutdown signal, test time was about 1.000000 seconds 00:23:10.640 00:23:10.640 Latency(us) 00:23:10.640 [2024-12-14T21:32:31.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.640 [2024-12-14T21:32:31.524Z] =================================================================================================================== 00:23:10.640 [2024-12-14T21:32:31.524Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:10.640 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357004 00:23:10.898 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 356643 00:23:10.898 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 356643 ']' 00:23:10.898 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 356643 00:23:10.898 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:10.898 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.898 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 356643 00:23:10.898 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:10.898 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:10.898 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 356643' 00:23:10.898 killing process with pid 356643 00:23:10.898 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 356643 00:23:10.898 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 356643 00:23:11.157 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:11.157 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:11.157 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.157 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.157 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=357346 00:23:11.157 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:11.157 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 357346 00:23:11.157 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 357346 ']' 00:23:11.157 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.157 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:11.157 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.157 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.157 22:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.157 [2024-12-14 22:32:31.915262] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:11.157 [2024-12-14 22:32:31.915314] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.157 [2024-12-14 22:32:31.989820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.157 [2024-12-14 22:32:32.007750] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.157 [2024-12-14 22:32:32.007787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.157 [2024-12-14 22:32:32.007794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.157 [2024-12-14 22:32:32.007799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.157 [2024-12-14 22:32:32.007805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:11.157 [2024-12-14 22:32:32.008333] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.416 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.416 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:11.416 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:11.416 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.416 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.416 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.416 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:11.416 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.416 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.416 [2024-12-14 22:32:32.147129] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.416 malloc0 00:23:11.416 [2024-12-14 22:32:32.175188] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.416 [2024-12-14 22:32:32.175403] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.416 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.417 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=357377 00:23:11.417 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 357377 /var/tmp/bdevperf.sock 00:23:11.417 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:11.417 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 357377 ']' 00:23:11.417 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.417 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.417 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.417 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.417 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.417 [2024-12-14 22:32:32.249139] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:11.417 [2024-12-14 22:32:32.249190] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357377 ] 00:23:11.676 [2024-12-14 22:32:32.323408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.676 [2024-12-14 22:32:32.345592] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.676 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.676 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:11.676 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rztsbDaX4n 00:23:11.935 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:11.935 [2024-12-14 22:32:32.776386] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.194 nvme0n1 00:23:12.194 22:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:12.194 Running I/O for 1 seconds... 
00:23:13.134 5009.00 IOPS, 19.57 MiB/s 00:23:13.134 Latency(us) 00:23:13.134 [2024-12-14T21:32:34.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.134 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:13.134 Verification LBA range: start 0x0 length 0x2000 00:23:13.134 nvme0n1 : 1.01 5072.67 19.82 0.00 0.00 25069.48 4930.80 34952.53 00:23:13.134 [2024-12-14T21:32:34.018Z] =================================================================================================================== 00:23:13.134 [2024-12-14T21:32:34.018Z] Total : 5072.67 19.82 0.00 0.00 25069.48 4930.80 34952.53 00:23:13.134 { 00:23:13.134 "results": [ 00:23:13.134 { 00:23:13.134 "job": "nvme0n1", 00:23:13.134 "core_mask": "0x2", 00:23:13.134 "workload": "verify", 00:23:13.134 "status": "finished", 00:23:13.134 "verify_range": { 00:23:13.134 "start": 0, 00:23:13.134 "length": 8192 00:23:13.134 }, 00:23:13.134 "queue_depth": 128, 00:23:13.134 "io_size": 4096, 00:23:13.134 "runtime": 1.012682, 00:23:13.134 "iops": 5072.668419108862, 00:23:13.134 "mibps": 19.81511101214399, 00:23:13.134 "io_failed": 0, 00:23:13.134 "io_timeout": 0, 00:23:13.134 "avg_latency_us": 25069.480727865994, 00:23:13.134 "min_latency_us": 4930.80380952381, 00:23:13.134 "max_latency_us": 34952.53333333333 00:23:13.134 } 00:23:13.134 ], 00:23:13.134 "core_count": 1 00:23:13.134 } 00:23:13.134 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:13.134 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.134 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.392 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.392 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:13.392 "subsystems": [ 00:23:13.392 { 00:23:13.392 "subsystem": 
"keyring", 00:23:13.392 "config": [ 00:23:13.392 { 00:23:13.392 "method": "keyring_file_add_key", 00:23:13.392 "params": { 00:23:13.392 "name": "key0", 00:23:13.392 "path": "/tmp/tmp.rztsbDaX4n" 00:23:13.392 } 00:23:13.392 } 00:23:13.392 ] 00:23:13.392 }, 00:23:13.392 { 00:23:13.392 "subsystem": "iobuf", 00:23:13.392 "config": [ 00:23:13.392 { 00:23:13.392 "method": "iobuf_set_options", 00:23:13.393 "params": { 00:23:13.393 "small_pool_count": 8192, 00:23:13.393 "large_pool_count": 1024, 00:23:13.393 "small_bufsize": 8192, 00:23:13.393 "large_bufsize": 135168, 00:23:13.393 "enable_numa": false 00:23:13.393 } 00:23:13.393 } 00:23:13.393 ] 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "subsystem": "sock", 00:23:13.393 "config": [ 00:23:13.393 { 00:23:13.393 "method": "sock_set_default_impl", 00:23:13.393 "params": { 00:23:13.393 "impl_name": "posix" 00:23:13.393 } 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "method": "sock_impl_set_options", 00:23:13.393 "params": { 00:23:13.393 "impl_name": "ssl", 00:23:13.393 "recv_buf_size": 4096, 00:23:13.393 "send_buf_size": 4096, 00:23:13.393 "enable_recv_pipe": true, 00:23:13.393 "enable_quickack": false, 00:23:13.393 "enable_placement_id": 0, 00:23:13.393 "enable_zerocopy_send_server": true, 00:23:13.393 "enable_zerocopy_send_client": false, 00:23:13.393 "zerocopy_threshold": 0, 00:23:13.393 "tls_version": 0, 00:23:13.393 "enable_ktls": false 00:23:13.393 } 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "method": "sock_impl_set_options", 00:23:13.393 "params": { 00:23:13.393 "impl_name": "posix", 00:23:13.393 "recv_buf_size": 2097152, 00:23:13.393 "send_buf_size": 2097152, 00:23:13.393 "enable_recv_pipe": true, 00:23:13.393 "enable_quickack": false, 00:23:13.393 "enable_placement_id": 0, 00:23:13.393 "enable_zerocopy_send_server": true, 00:23:13.393 "enable_zerocopy_send_client": false, 00:23:13.393 "zerocopy_threshold": 0, 00:23:13.393 "tls_version": 0, 00:23:13.393 "enable_ktls": false 00:23:13.393 } 00:23:13.393 } 00:23:13.393 
] 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "subsystem": "vmd", 00:23:13.393 "config": [] 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "subsystem": "accel", 00:23:13.393 "config": [ 00:23:13.393 { 00:23:13.393 "method": "accel_set_options", 00:23:13.393 "params": { 00:23:13.393 "small_cache_size": 128, 00:23:13.393 "large_cache_size": 16, 00:23:13.393 "task_count": 2048, 00:23:13.393 "sequence_count": 2048, 00:23:13.393 "buf_count": 2048 00:23:13.393 } 00:23:13.393 } 00:23:13.393 ] 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "subsystem": "bdev", 00:23:13.393 "config": [ 00:23:13.393 { 00:23:13.393 "method": "bdev_set_options", 00:23:13.393 "params": { 00:23:13.393 "bdev_io_pool_size": 65535, 00:23:13.393 "bdev_io_cache_size": 256, 00:23:13.393 "bdev_auto_examine": true, 00:23:13.393 "iobuf_small_cache_size": 128, 00:23:13.393 "iobuf_large_cache_size": 16 00:23:13.393 } 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "method": "bdev_raid_set_options", 00:23:13.393 "params": { 00:23:13.393 "process_window_size_kb": 1024, 00:23:13.393 "process_max_bandwidth_mb_sec": 0 00:23:13.393 } 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "method": "bdev_iscsi_set_options", 00:23:13.393 "params": { 00:23:13.393 "timeout_sec": 30 00:23:13.393 } 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "method": "bdev_nvme_set_options", 00:23:13.393 "params": { 00:23:13.393 "action_on_timeout": "none", 00:23:13.393 "timeout_us": 0, 00:23:13.393 "timeout_admin_us": 0, 00:23:13.393 "keep_alive_timeout_ms": 10000, 00:23:13.393 "arbitration_burst": 0, 00:23:13.393 "low_priority_weight": 0, 00:23:13.393 "medium_priority_weight": 0, 00:23:13.393 "high_priority_weight": 0, 00:23:13.393 "nvme_adminq_poll_period_us": 10000, 00:23:13.393 "nvme_ioq_poll_period_us": 0, 00:23:13.393 "io_queue_requests": 0, 00:23:13.393 "delay_cmd_submit": true, 00:23:13.393 "transport_retry_count": 4, 00:23:13.393 "bdev_retry_count": 3, 00:23:13.393 "transport_ack_timeout": 0, 00:23:13.393 "ctrlr_loss_timeout_sec": 0, 
00:23:13.393 "reconnect_delay_sec": 0, 00:23:13.393 "fast_io_fail_timeout_sec": 0, 00:23:13.393 "disable_auto_failback": false, 00:23:13.393 "generate_uuids": false, 00:23:13.393 "transport_tos": 0, 00:23:13.393 "nvme_error_stat": false, 00:23:13.393 "rdma_srq_size": 0, 00:23:13.393 "io_path_stat": false, 00:23:13.393 "allow_accel_sequence": false, 00:23:13.393 "rdma_max_cq_size": 0, 00:23:13.393 "rdma_cm_event_timeout_ms": 0, 00:23:13.393 "dhchap_digests": [ 00:23:13.393 "sha256", 00:23:13.393 "sha384", 00:23:13.393 "sha512" 00:23:13.393 ], 00:23:13.393 "dhchap_dhgroups": [ 00:23:13.393 "null", 00:23:13.393 "ffdhe2048", 00:23:13.393 "ffdhe3072", 00:23:13.393 "ffdhe4096", 00:23:13.393 "ffdhe6144", 00:23:13.393 "ffdhe8192" 00:23:13.393 ], 00:23:13.393 "rdma_umr_per_io": false 00:23:13.393 } 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "method": "bdev_nvme_set_hotplug", 00:23:13.393 "params": { 00:23:13.393 "period_us": 100000, 00:23:13.393 "enable": false 00:23:13.393 } 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "method": "bdev_malloc_create", 00:23:13.393 "params": { 00:23:13.393 "name": "malloc0", 00:23:13.393 "num_blocks": 8192, 00:23:13.393 "block_size": 4096, 00:23:13.393 "physical_block_size": 4096, 00:23:13.393 "uuid": "6d8d89e3-f657-4a07-b5c1-4b0042528d60", 00:23:13.393 "optimal_io_boundary": 0, 00:23:13.393 "md_size": 0, 00:23:13.393 "dif_type": 0, 00:23:13.393 "dif_is_head_of_md": false, 00:23:13.393 "dif_pi_format": 0 00:23:13.393 } 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "method": "bdev_wait_for_examine" 00:23:13.393 } 00:23:13.393 ] 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "subsystem": "nbd", 00:23:13.393 "config": [] 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "subsystem": "scheduler", 00:23:13.393 "config": [ 00:23:13.393 { 00:23:13.393 "method": "framework_set_scheduler", 00:23:13.393 "params": { 00:23:13.393 "name": "static" 00:23:13.393 } 00:23:13.393 } 00:23:13.393 ] 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "subsystem": "nvmf", 
00:23:13.393 "config": [ 00:23:13.393 { 00:23:13.393 "method": "nvmf_set_config", 00:23:13.393 "params": { 00:23:13.393 "discovery_filter": "match_any", 00:23:13.393 "admin_cmd_passthru": { 00:23:13.393 "identify_ctrlr": false 00:23:13.393 }, 00:23:13.393 "dhchap_digests": [ 00:23:13.393 "sha256", 00:23:13.393 "sha384", 00:23:13.393 "sha512" 00:23:13.393 ], 00:23:13.393 "dhchap_dhgroups": [ 00:23:13.393 "null", 00:23:13.393 "ffdhe2048", 00:23:13.393 "ffdhe3072", 00:23:13.393 "ffdhe4096", 00:23:13.393 "ffdhe6144", 00:23:13.393 "ffdhe8192" 00:23:13.393 ] 00:23:13.393 } 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "method": "nvmf_set_max_subsystems", 00:23:13.393 "params": { 00:23:13.393 "max_subsystems": 1024 00:23:13.393 } 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "method": "nvmf_set_crdt", 00:23:13.393 "params": { 00:23:13.393 "crdt1": 0, 00:23:13.393 "crdt2": 0, 00:23:13.393 "crdt3": 0 00:23:13.393 } 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "method": "nvmf_create_transport", 00:23:13.393 "params": { 00:23:13.393 "trtype": "TCP", 00:23:13.393 "max_queue_depth": 128, 00:23:13.393 "max_io_qpairs_per_ctrlr": 127, 00:23:13.393 "in_capsule_data_size": 4096, 00:23:13.393 "max_io_size": 131072, 00:23:13.393 "io_unit_size": 131072, 00:23:13.393 "max_aq_depth": 128, 00:23:13.393 "num_shared_buffers": 511, 00:23:13.393 "buf_cache_size": 4294967295, 00:23:13.393 "dif_insert_or_strip": false, 00:23:13.393 "zcopy": false, 00:23:13.393 "c2h_success": false, 00:23:13.393 "sock_priority": 0, 00:23:13.393 "abort_timeout_sec": 1, 00:23:13.393 "ack_timeout": 0, 00:23:13.393 "data_wr_pool_size": 0 00:23:13.393 } 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "method": "nvmf_create_subsystem", 00:23:13.393 "params": { 00:23:13.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.393 "allow_any_host": false, 00:23:13.393 "serial_number": "00000000000000000000", 00:23:13.393 "model_number": "SPDK bdev Controller", 00:23:13.393 "max_namespaces": 32, 00:23:13.393 "min_cntlid": 1, 
00:23:13.393 "max_cntlid": 65519, 00:23:13.393 "ana_reporting": false 00:23:13.393 } 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "method": "nvmf_subsystem_add_host", 00:23:13.393 "params": { 00:23:13.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.393 "host": "nqn.2016-06.io.spdk:host1", 00:23:13.393 "psk": "key0" 00:23:13.393 } 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "method": "nvmf_subsystem_add_ns", 00:23:13.393 "params": { 00:23:13.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.393 "namespace": { 00:23:13.393 "nsid": 1, 00:23:13.393 "bdev_name": "malloc0", 00:23:13.393 "nguid": "6D8D89E3F6574A07B5C14B0042528D60", 00:23:13.393 "uuid": "6d8d89e3-f657-4a07-b5c1-4b0042528d60", 00:23:13.393 "no_auto_visible": false 00:23:13.393 } 00:23:13.393 } 00:23:13.393 }, 00:23:13.393 { 00:23:13.393 "method": "nvmf_subsystem_add_listener", 00:23:13.393 "params": { 00:23:13.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.393 "listen_address": { 00:23:13.393 "trtype": "TCP", 00:23:13.393 "adrfam": "IPv4", 00:23:13.393 "traddr": "10.0.0.2", 00:23:13.393 "trsvcid": "4420" 00:23:13.393 }, 00:23:13.393 "secure_channel": false, 00:23:13.393 "sock_impl": "ssl" 00:23:13.393 } 00:23:13.393 } 00:23:13.393 ] 00:23:13.393 } 00:23:13.393 ] 00:23:13.393 }' 00:23:13.393 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:13.653 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:13.653 "subsystems": [ 00:23:13.653 { 00:23:13.653 "subsystem": "keyring", 00:23:13.653 "config": [ 00:23:13.653 { 00:23:13.653 "method": "keyring_file_add_key", 00:23:13.653 "params": { 00:23:13.653 "name": "key0", 00:23:13.653 "path": "/tmp/tmp.rztsbDaX4n" 00:23:13.653 } 00:23:13.653 } 00:23:13.653 ] 00:23:13.653 }, 00:23:13.653 { 00:23:13.653 "subsystem": "iobuf", 00:23:13.653 "config": [ 00:23:13.653 { 00:23:13.653 "method": 
"iobuf_set_options", 00:23:13.653 "params": { 00:23:13.653 "small_pool_count": 8192, 00:23:13.653 "large_pool_count": 1024, 00:23:13.653 "small_bufsize": 8192, 00:23:13.653 "large_bufsize": 135168, 00:23:13.653 "enable_numa": false 00:23:13.653 } 00:23:13.653 } 00:23:13.653 ] 00:23:13.653 }, 00:23:13.653 { 00:23:13.653 "subsystem": "sock", 00:23:13.653 "config": [ 00:23:13.653 { 00:23:13.653 "method": "sock_set_default_impl", 00:23:13.653 "params": { 00:23:13.653 "impl_name": "posix" 00:23:13.653 } 00:23:13.653 }, 00:23:13.653 { 00:23:13.653 "method": "sock_impl_set_options", 00:23:13.653 "params": { 00:23:13.653 "impl_name": "ssl", 00:23:13.653 "recv_buf_size": 4096, 00:23:13.653 "send_buf_size": 4096, 00:23:13.653 "enable_recv_pipe": true, 00:23:13.653 "enable_quickack": false, 00:23:13.653 "enable_placement_id": 0, 00:23:13.653 "enable_zerocopy_send_server": true, 00:23:13.653 "enable_zerocopy_send_client": false, 00:23:13.653 "zerocopy_threshold": 0, 00:23:13.653 "tls_version": 0, 00:23:13.653 "enable_ktls": false 00:23:13.653 } 00:23:13.653 }, 00:23:13.653 { 00:23:13.653 "method": "sock_impl_set_options", 00:23:13.653 "params": { 00:23:13.653 "impl_name": "posix", 00:23:13.653 "recv_buf_size": 2097152, 00:23:13.653 "send_buf_size": 2097152, 00:23:13.653 "enable_recv_pipe": true, 00:23:13.653 "enable_quickack": false, 00:23:13.653 "enable_placement_id": 0, 00:23:13.653 "enable_zerocopy_send_server": true, 00:23:13.653 "enable_zerocopy_send_client": false, 00:23:13.653 "zerocopy_threshold": 0, 00:23:13.653 "tls_version": 0, 00:23:13.653 "enable_ktls": false 00:23:13.653 } 00:23:13.653 } 00:23:13.653 ] 00:23:13.653 }, 00:23:13.653 { 00:23:13.653 "subsystem": "vmd", 00:23:13.653 "config": [] 00:23:13.653 }, 00:23:13.653 { 00:23:13.653 "subsystem": "accel", 00:23:13.653 "config": [ 00:23:13.653 { 00:23:13.653 "method": "accel_set_options", 00:23:13.653 "params": { 00:23:13.653 "small_cache_size": 128, 00:23:13.653 "large_cache_size": 16, 00:23:13.653 "task_count": 
2048, 00:23:13.653 "sequence_count": 2048, 00:23:13.653 "buf_count": 2048 00:23:13.653 } 00:23:13.653 } 00:23:13.653 ] 00:23:13.653 }, 00:23:13.653 { 00:23:13.653 "subsystem": "bdev", 00:23:13.653 "config": [ 00:23:13.653 { 00:23:13.653 "method": "bdev_set_options", 00:23:13.653 "params": { 00:23:13.653 "bdev_io_pool_size": 65535, 00:23:13.653 "bdev_io_cache_size": 256, 00:23:13.653 "bdev_auto_examine": true, 00:23:13.653 "iobuf_small_cache_size": 128, 00:23:13.653 "iobuf_large_cache_size": 16 00:23:13.653 } 00:23:13.653 }, 00:23:13.653 { 00:23:13.653 "method": "bdev_raid_set_options", 00:23:13.653 "params": { 00:23:13.653 "process_window_size_kb": 1024, 00:23:13.653 "process_max_bandwidth_mb_sec": 0 00:23:13.653 } 00:23:13.653 }, 00:23:13.653 { 00:23:13.653 "method": "bdev_iscsi_set_options", 00:23:13.653 "params": { 00:23:13.653 "timeout_sec": 30 00:23:13.653 } 00:23:13.653 }, 00:23:13.653 { 00:23:13.653 "method": "bdev_nvme_set_options", 00:23:13.653 "params": { 00:23:13.653 "action_on_timeout": "none", 00:23:13.653 "timeout_us": 0, 00:23:13.653 "timeout_admin_us": 0, 00:23:13.653 "keep_alive_timeout_ms": 10000, 00:23:13.653 "arbitration_burst": 0, 00:23:13.653 "low_priority_weight": 0, 00:23:13.653 "medium_priority_weight": 0, 00:23:13.653 "high_priority_weight": 0, 00:23:13.653 "nvme_adminq_poll_period_us": 10000, 00:23:13.653 "nvme_ioq_poll_period_us": 0, 00:23:13.653 "io_queue_requests": 512, 00:23:13.653 "delay_cmd_submit": true, 00:23:13.653 "transport_retry_count": 4, 00:23:13.653 "bdev_retry_count": 3, 00:23:13.653 "transport_ack_timeout": 0, 00:23:13.653 "ctrlr_loss_timeout_sec": 0, 00:23:13.653 "reconnect_delay_sec": 0, 00:23:13.653 "fast_io_fail_timeout_sec": 0, 00:23:13.653 "disable_auto_failback": false, 00:23:13.653 "generate_uuids": false, 00:23:13.653 "transport_tos": 0, 00:23:13.653 "nvme_error_stat": false, 00:23:13.653 "rdma_srq_size": 0, 00:23:13.653 "io_path_stat": false, 00:23:13.653 "allow_accel_sequence": false, 00:23:13.653 
"rdma_max_cq_size": 0, 00:23:13.653 "rdma_cm_event_timeout_ms": 0, 00:23:13.653 "dhchap_digests": [ 00:23:13.653 "sha256", 00:23:13.653 "sha384", 00:23:13.653 "sha512" 00:23:13.653 ], 00:23:13.653 "dhchap_dhgroups": [ 00:23:13.653 "null", 00:23:13.653 "ffdhe2048", 00:23:13.653 "ffdhe3072", 00:23:13.653 "ffdhe4096", 00:23:13.653 "ffdhe6144", 00:23:13.653 "ffdhe8192" 00:23:13.653 ], 00:23:13.653 "rdma_umr_per_io": false 00:23:13.653 } 00:23:13.653 }, 00:23:13.653 { 00:23:13.653 "method": "bdev_nvme_attach_controller", 00:23:13.653 "params": { 00:23:13.653 "name": "nvme0", 00:23:13.653 "trtype": "TCP", 00:23:13.653 "adrfam": "IPv4", 00:23:13.653 "traddr": "10.0.0.2", 00:23:13.653 "trsvcid": "4420", 00:23:13.653 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.653 "prchk_reftag": false, 00:23:13.653 "prchk_guard": false, 00:23:13.653 "ctrlr_loss_timeout_sec": 0, 00:23:13.653 "reconnect_delay_sec": 0, 00:23:13.653 "fast_io_fail_timeout_sec": 0, 00:23:13.653 "psk": "key0", 00:23:13.653 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:13.653 "hdgst": false, 00:23:13.653 "ddgst": false, 00:23:13.653 "multipath": "multipath" 00:23:13.653 } 00:23:13.653 }, 00:23:13.653 { 00:23:13.653 "method": "bdev_nvme_set_hotplug", 00:23:13.653 "params": { 00:23:13.653 "period_us": 100000, 00:23:13.653 "enable": false 00:23:13.653 } 00:23:13.653 }, 00:23:13.653 { 00:23:13.653 "method": "bdev_enable_histogram", 00:23:13.653 "params": { 00:23:13.653 "name": "nvme0n1", 00:23:13.653 "enable": true 00:23:13.653 } 00:23:13.653 }, 00:23:13.653 { 00:23:13.653 "method": "bdev_wait_for_examine" 00:23:13.653 } 00:23:13.653 ] 00:23:13.653 }, 00:23:13.653 { 00:23:13.653 "subsystem": "nbd", 00:23:13.653 "config": [] 00:23:13.653 } 00:23:13.653 ] 00:23:13.653 }' 00:23:13.654 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 357377 00:23:13.654 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357377 ']' 00:23:13.654 22:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 357377 00:23:13.654 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:13.654 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.654 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357377 00:23:13.654 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:13.654 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:13.654 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357377' 00:23:13.654 killing process with pid 357377 00:23:13.654 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357377 00:23:13.654 Received shutdown signal, test time was about 1.000000 seconds 00:23:13.654 00:23:13.654 Latency(us) 00:23:13.654 [2024-12-14T21:32:34.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.654 [2024-12-14T21:32:34.538Z] =================================================================================================================== 00:23:13.654 [2024-12-14T21:32:34.538Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:13.654 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357377 00:23:13.913 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 357346 00:23:13.913 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357346 ']' 00:23:13.914 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 357346 00:23:13.914 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:13.914 22:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.914 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357346 00:23:13.914 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:13.914 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:13.914 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357346' 00:23:13.914 killing process with pid 357346 00:23:13.914 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357346 00:23:13.914 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357346 00:23:13.914 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:13.914 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:13.914 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.914 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:13.914 "subsystems": [ 00:23:13.914 { 00:23:13.914 "subsystem": "keyring", 00:23:13.914 "config": [ 00:23:13.914 { 00:23:13.914 "method": "keyring_file_add_key", 00:23:13.914 "params": { 00:23:13.914 "name": "key0", 00:23:13.914 "path": "/tmp/tmp.rztsbDaX4n" 00:23:13.914 } 00:23:13.914 } 00:23:13.914 ] 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "subsystem": "iobuf", 00:23:13.914 "config": [ 00:23:13.914 { 00:23:13.914 "method": "iobuf_set_options", 00:23:13.914 "params": { 00:23:13.914 "small_pool_count": 8192, 00:23:13.914 "large_pool_count": 1024, 00:23:13.914 "small_bufsize": 8192, 00:23:13.914 "large_bufsize": 135168, 00:23:13.914 "enable_numa": false 00:23:13.914 } 00:23:13.914 } 
00:23:13.914 ] 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "subsystem": "sock", 00:23:13.914 "config": [ 00:23:13.914 { 00:23:13.914 "method": "sock_set_default_impl", 00:23:13.914 "params": { 00:23:13.914 "impl_name": "posix" 00:23:13.914 } 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "method": "sock_impl_set_options", 00:23:13.914 "params": { 00:23:13.914 "impl_name": "ssl", 00:23:13.914 "recv_buf_size": 4096, 00:23:13.914 "send_buf_size": 4096, 00:23:13.914 "enable_recv_pipe": true, 00:23:13.914 "enable_quickack": false, 00:23:13.914 "enable_placement_id": 0, 00:23:13.914 "enable_zerocopy_send_server": true, 00:23:13.914 "enable_zerocopy_send_client": false, 00:23:13.914 "zerocopy_threshold": 0, 00:23:13.914 "tls_version": 0, 00:23:13.914 "enable_ktls": false 00:23:13.914 } 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "method": "sock_impl_set_options", 00:23:13.914 "params": { 00:23:13.914 "impl_name": "posix", 00:23:13.914 "recv_buf_size": 2097152, 00:23:13.914 "send_buf_size": 2097152, 00:23:13.914 "enable_recv_pipe": true, 00:23:13.914 "enable_quickack": false, 00:23:13.914 "enable_placement_id": 0, 00:23:13.914 "enable_zerocopy_send_server": true, 00:23:13.914 "enable_zerocopy_send_client": false, 00:23:13.914 "zerocopy_threshold": 0, 00:23:13.914 "tls_version": 0, 00:23:13.914 "enable_ktls": false 00:23:13.914 } 00:23:13.914 } 00:23:13.914 ] 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "subsystem": "vmd", 00:23:13.914 "config": [] 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "subsystem": "accel", 00:23:13.914 "config": [ 00:23:13.914 { 00:23:13.914 "method": "accel_set_options", 00:23:13.914 "params": { 00:23:13.914 "small_cache_size": 128, 00:23:13.914 "large_cache_size": 16, 00:23:13.914 "task_count": 2048, 00:23:13.914 "sequence_count": 2048, 00:23:13.914 "buf_count": 2048 00:23:13.914 } 00:23:13.914 } 00:23:13.914 ] 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "subsystem": "bdev", 00:23:13.914 "config": [ 00:23:13.914 { 00:23:13.914 "method": 
"bdev_set_options", 00:23:13.914 "params": { 00:23:13.914 "bdev_io_pool_size": 65535, 00:23:13.914 "bdev_io_cache_size": 256, 00:23:13.914 "bdev_auto_examine": true, 00:23:13.914 "iobuf_small_cache_size": 128, 00:23:13.914 "iobuf_large_cache_size": 16 00:23:13.914 } 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "method": "bdev_raid_set_options", 00:23:13.914 "params": { 00:23:13.914 "process_window_size_kb": 1024, 00:23:13.914 "process_max_bandwidth_mb_sec": 0 00:23:13.914 } 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "method": "bdev_iscsi_set_options", 00:23:13.914 "params": { 00:23:13.914 "timeout_sec": 30 00:23:13.914 } 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "method": "bdev_nvme_set_options", 00:23:13.914 "params": { 00:23:13.914 "action_on_timeout": "none", 00:23:13.914 "timeout_us": 0, 00:23:13.914 "timeout_admin_us": 0, 00:23:13.914 "keep_alive_timeout_ms": 10000, 00:23:13.914 "arbitration_burst": 0, 00:23:13.914 "low_priority_weight": 0, 00:23:13.914 "medium_priority_weight": 0, 00:23:13.914 "high_priority_weight": 0, 00:23:13.914 "nvme_adminq_poll_period_us": 10000, 00:23:13.914 "nvme_ioq_poll_period_us": 0, 00:23:13.914 "io_queue_requests": 0, 00:23:13.914 "delay_cmd_submit": true, 00:23:13.914 "transport_retry_count": 4, 00:23:13.914 "bdev_retry_count": 3, 00:23:13.914 "transport_ack_timeout": 0, 00:23:13.914 "ctrlr_loss_timeout_sec": 0, 00:23:13.914 "reconnect_delay_sec": 0, 00:23:13.914 "fast_io_fail_timeout_sec": 0, 00:23:13.914 "disable_auto_failback": false, 00:23:13.914 "generate_uuids": false, 00:23:13.914 "transport_tos": 0, 00:23:13.914 "nvme_error_stat": false, 00:23:13.914 "rdma_srq_size": 0, 00:23:13.914 "io_path_stat": false, 00:23:13.914 "allow_accel_sequence": false, 00:23:13.914 "rdma_max_cq_size": 0, 00:23:13.914 "rdma_cm_event_timeout_ms": 0, 00:23:13.914 "dhchap_digests": [ 00:23:13.914 "sha256", 00:23:13.914 "sha384", 00:23:13.914 "sha512" 00:23:13.914 ], 00:23:13.914 "dhchap_dhgroups": [ 00:23:13.914 "null", 00:23:13.914 
"ffdhe2048", 00:23:13.914 "ffdhe3072", 00:23:13.914 "ffdhe4096", 00:23:13.914 "ffdhe6144", 00:23:13.914 "ffdhe8192" 00:23:13.914 ], 00:23:13.914 "rdma_umr_per_io": false 00:23:13.914 } 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "method": "bdev_nvme_set_hotplug", 00:23:13.914 "params": { 00:23:13.914 "period_us": 100000, 00:23:13.914 "enable": false 00:23:13.914 } 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "method": "bdev_malloc_create", 00:23:13.914 "params": { 00:23:13.914 "name": "malloc0", 00:23:13.914 "num_blocks": 8192, 00:23:13.914 "block_size": 4096, 00:23:13.914 "physical_block_size": 4096, 00:23:13.914 "uuid": "6d8d89e3-f657-4a07-b5c1-4b0042528d60", 00:23:13.914 "optimal_io_boundary": 0, 00:23:13.914 "md_size": 0, 00:23:13.914 "dif_type": 0, 00:23:13.914 "dif_is_head_of_md": false, 00:23:13.914 "dif_pi_format": 0 00:23:13.914 } 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "method": "bdev_wait_for_examine" 00:23:13.914 } 00:23:13.914 ] 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "subsystem": "nbd", 00:23:13.914 "config": [] 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "subsystem": "scheduler", 00:23:13.914 "config": [ 00:23:13.914 { 00:23:13.914 "method": "framework_set_scheduler", 00:23:13.914 "params": { 00:23:13.914 "name": "static" 00:23:13.914 } 00:23:13.914 } 00:23:13.914 ] 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "subsystem": "nvmf", 00:23:13.914 "config": [ 00:23:13.914 { 00:23:13.914 "method": "nvmf_set_config", 00:23:13.914 "params": { 00:23:13.914 "discovery_filter": "match_any", 00:23:13.914 "admin_cmd_passthru": { 00:23:13.914 "identify_ctrlr": false 00:23:13.914 }, 00:23:13.914 "dhchap_digests": [ 00:23:13.914 "sha256", 00:23:13.914 "sha384", 00:23:13.914 "sha512" 00:23:13.914 ], 00:23:13.914 "dhchap_dhgroups": [ 00:23:13.914 "null", 00:23:13.914 "ffdhe2048", 00:23:13.914 "ffdhe3072", 00:23:13.914 "ffdhe4096", 00:23:13.914 "ffdhe6144", 00:23:13.914 "ffdhe8192" 00:23:13.914 ] 00:23:13.914 } 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 
"method": "nvmf_set_max_subsystems", 00:23:13.914 "params": { 00:23:13.914 "max_subsystems": 1024 00:23:13.914 } 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "method": "nvmf_set_crdt", 00:23:13.914 "params": { 00:23:13.914 "crdt1": 0, 00:23:13.914 "crdt2": 0, 00:23:13.914 "crdt3": 0 00:23:13.914 } 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "method": "nvmf_create_transport", 00:23:13.914 "params": { 00:23:13.914 "trtype": "TCP", 00:23:13.914 "max_queue_depth": 128, 00:23:13.914 "max_io_qpairs_per_ctrlr": 127, 00:23:13.914 "in_capsule_data_size": 4096, 00:23:13.914 "max_io_size": 131072, 00:23:13.914 "io_unit_size": 131072, 00:23:13.914 "max_aq_depth": 128, 00:23:13.914 "num_shared_buffers": 511, 00:23:13.914 "buf_cache_size": 4294967295, 00:23:13.914 "dif_insert_or_strip": false, 00:23:13.914 "zcopy": false, 00:23:13.914 "c2h_success": false, 00:23:13.914 "sock_priority": 0, 00:23:13.914 "abort_timeout_sec": 1, 00:23:13.914 "ack_timeout": 0, 00:23:13.914 "data_wr_pool_size": 0 00:23:13.914 } 00:23:13.914 }, 00:23:13.914 { 00:23:13.914 "method": "nvmf_create_subsystem", 00:23:13.914 "params": { 00:23:13.914 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.914 "allow_any_host": false, 00:23:13.914 "serial_number": "00000000000000000000", 00:23:13.914 "model_number": "SPDK bdev Controller", 00:23:13.914 "max_namespaces": 32, 00:23:13.915 "min_cntlid": 1, 00:23:13.915 "max_cntlid": 65519, 00:23:13.915 "ana_reporting": false 00:23:13.915 } 00:23:13.915 }, 00:23:13.915 { 00:23:13.915 "method": "nvmf_subsystem_add_host", 00:23:13.915 "params": { 00:23:13.915 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.915 "host": "nqn.2016-06.io.spdk:host1", 00:23:13.915 "psk": "key0" 00:23:13.915 } 00:23:13.915 }, 00:23:13.915 { 00:23:13.915 "method": "nvmf_subsystem_add_ns", 00:23:13.915 "params": { 00:23:13.915 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.915 "namespace": { 00:23:13.915 "nsid": 1, 00:23:13.915 "bdev_name": "malloc0", 00:23:13.915 "nguid": 
"6D8D89E3F6574A07B5C14B0042528D60", 00:23:13.915 "uuid": "6d8d89e3-f657-4a07-b5c1-4b0042528d60", 00:23:13.915 "no_auto_visible": false 00:23:13.915 } 00:23:13.915 } 00:23:13.915 }, 00:23:13.915 { 00:23:13.915 "method": "nvmf_subsystem_add_listener", 00:23:13.915 "params": { 00:23:13.915 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.915 "listen_address": { 00:23:13.915 "trtype": "TCP", 00:23:13.915 "adrfam": "IPv4", 00:23:13.915 "traddr": "10.0.0.2", 00:23:13.915 "trsvcid": "4420" 00:23:13.915 }, 00:23:13.915 "secure_channel": false, 00:23:13.915 "sock_impl": "ssl" 00:23:13.915 } 00:23:13.915 } 00:23:13.915 ] 00:23:13.915 } 00:23:13.915 ] 00:23:13.915 }' 00:23:13.915 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.915 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=357833 00:23:13.915 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:13.915 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 357833 00:23:13.915 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 357833 ']' 00:23:13.915 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.915 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.915 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:13.915 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.915 22:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.174 [2024-12-14 22:32:34.843753] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:14.174 [2024-12-14 22:32:34.843804] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.174 [2024-12-14 22:32:34.919509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.174 [2024-12-14 22:32:34.949940] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.174 [2024-12-14 22:32:34.949977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.174 [2024-12-14 22:32:34.949984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.174 [2024-12-14 22:32:34.949990] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.174 [2024-12-14 22:32:34.949994] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:14.174 [2024-12-14 22:32:34.950527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.436 [2024-12-14 22:32:35.159574] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.436 [2024-12-14 22:32:35.191617] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:14.436 [2024-12-14 22:32:35.191807] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.004 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.004 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:15.004 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:15.004 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.004 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.004 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.004 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=358071 00:23:15.004 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 358071 /var/tmp/bdevperf.sock 00:23:15.004 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 358071 ']' 00:23:15.004 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.004 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:15.004 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:15.004 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.004 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:15.004 "subsystems": [ 00:23:15.004 { 00:23:15.004 "subsystem": "keyring", 00:23:15.004 "config": [ 00:23:15.004 { 00:23:15.004 "method": "keyring_file_add_key", 00:23:15.004 "params": { 00:23:15.004 "name": "key0", 00:23:15.004 "path": "/tmp/tmp.rztsbDaX4n" 00:23:15.004 } 00:23:15.004 } 00:23:15.004 ] 00:23:15.004 }, 00:23:15.004 { 00:23:15.004 "subsystem": "iobuf", 00:23:15.004 "config": [ 00:23:15.004 { 00:23:15.004 "method": "iobuf_set_options", 00:23:15.004 "params": { 00:23:15.004 "small_pool_count": 8192, 00:23:15.004 "large_pool_count": 1024, 00:23:15.004 "small_bufsize": 8192, 00:23:15.004 "large_bufsize": 135168, 00:23:15.004 "enable_numa": false 00:23:15.004 } 00:23:15.004 } 00:23:15.004 ] 00:23:15.004 }, 00:23:15.004 { 00:23:15.004 "subsystem": "sock", 00:23:15.004 "config": [ 00:23:15.004 { 00:23:15.004 "method": "sock_set_default_impl", 00:23:15.004 "params": { 00:23:15.004 "impl_name": "posix" 00:23:15.004 } 00:23:15.004 }, 00:23:15.004 { 00:23:15.004 "method": "sock_impl_set_options", 00:23:15.004 "params": { 00:23:15.004 "impl_name": "ssl", 00:23:15.004 "recv_buf_size": 4096, 00:23:15.004 "send_buf_size": 4096, 00:23:15.004 "enable_recv_pipe": true, 00:23:15.004 "enable_quickack": false, 00:23:15.004 "enable_placement_id": 0, 00:23:15.004 "enable_zerocopy_send_server": true, 00:23:15.004 "enable_zerocopy_send_client": false, 00:23:15.004 "zerocopy_threshold": 0, 00:23:15.004 "tls_version": 0, 00:23:15.004 "enable_ktls": false 00:23:15.004 } 00:23:15.004 }, 00:23:15.004 { 00:23:15.004 "method": "sock_impl_set_options", 00:23:15.004 "params": { 
00:23:15.004 "impl_name": "posix", 00:23:15.004 "recv_buf_size": 2097152, 00:23:15.004 "send_buf_size": 2097152, 00:23:15.004 "enable_recv_pipe": true, 00:23:15.004 "enable_quickack": false, 00:23:15.004 "enable_placement_id": 0, 00:23:15.004 "enable_zerocopy_send_server": true, 00:23:15.004 "enable_zerocopy_send_client": false, 00:23:15.004 "zerocopy_threshold": 0, 00:23:15.004 "tls_version": 0, 00:23:15.004 "enable_ktls": false 00:23:15.004 } 00:23:15.004 } 00:23:15.004 ] 00:23:15.004 }, 00:23:15.004 { 00:23:15.004 "subsystem": "vmd", 00:23:15.004 "config": [] 00:23:15.004 }, 00:23:15.004 { 00:23:15.004 "subsystem": "accel", 00:23:15.004 "config": [ 00:23:15.004 { 00:23:15.005 "method": "accel_set_options", 00:23:15.005 "params": { 00:23:15.005 "small_cache_size": 128, 00:23:15.005 "large_cache_size": 16, 00:23:15.005 "task_count": 2048, 00:23:15.005 "sequence_count": 2048, 00:23:15.005 "buf_count": 2048 00:23:15.005 } 00:23:15.005 } 00:23:15.005 ] 00:23:15.005 }, 00:23:15.005 { 00:23:15.005 "subsystem": "bdev", 00:23:15.005 "config": [ 00:23:15.005 { 00:23:15.005 "method": "bdev_set_options", 00:23:15.005 "params": { 00:23:15.005 "bdev_io_pool_size": 65535, 00:23:15.005 "bdev_io_cache_size": 256, 00:23:15.005 "bdev_auto_examine": true, 00:23:15.005 "iobuf_small_cache_size": 128, 00:23:15.005 "iobuf_large_cache_size": 16 00:23:15.005 } 00:23:15.005 }, 00:23:15.005 { 00:23:15.005 "method": "bdev_raid_set_options", 00:23:15.005 "params": { 00:23:15.005 "process_window_size_kb": 1024, 00:23:15.005 "process_max_bandwidth_mb_sec": 0 00:23:15.005 } 00:23:15.005 }, 00:23:15.005 { 00:23:15.005 "method": "bdev_iscsi_set_options", 00:23:15.005 "params": { 00:23:15.005 "timeout_sec": 30 00:23:15.005 } 00:23:15.005 }, 00:23:15.005 { 00:23:15.005 "method": "bdev_nvme_set_options", 00:23:15.005 "params": { 00:23:15.005 "action_on_timeout": "none", 00:23:15.005 "timeout_us": 0, 00:23:15.005 "timeout_admin_us": 0, 00:23:15.005 "keep_alive_timeout_ms": 10000, 00:23:15.005 
"arbitration_burst": 0, 00:23:15.005 "low_priority_weight": 0, 00:23:15.005 "medium_priority_weight": 0, 00:23:15.005 "high_priority_weight": 0, 00:23:15.005 "nvme_adminq_poll_period_us": 10000, 00:23:15.005 "nvme_ioq_poll_period_us": 0, 00:23:15.005 "io_queue_requests": 512, 00:23:15.005 "delay_cmd_submit": true, 00:23:15.005 "transport_retry_count": 4, 00:23:15.005 "bdev_retry_count": 3, 00:23:15.005 "transport_ack_timeout": 0, 00:23:15.005 "ctrlr_loss_timeout_sec": 0, 00:23:15.005 "reconnect_delay_sec": 0, 00:23:15.005 "fast_io_fail_timeout_sec": 0, 00:23:15.005 "disable_auto_failback": false, 00:23:15.005 "generate_uuids": false, 00:23:15.005 "transport_tos": 0, 00:23:15.005 "nvme_error_stat": false, 00:23:15.005 "rdma_srq_size": 0, 00:23:15.005 "io_path_stat": false, 00:23:15.005 "allow_accel_sequence": false, 00:23:15.005 "rdma_max_cq_size": 0, 00:23:15.005 "rdma_cm_event_timeout_ms": 0, 00:23:15.005 "dhchap_digests": [ 00:23:15.005 "sha256", 00:23:15.005 "sha384", 00:23:15.005 "sha512" 00:23:15.005 ], 00:23:15.005 "dhchap_dhgroups": [ 00:23:15.005 "null", 00:23:15.005 "ffdhe2048", 00:23:15.005 "ffdhe3072", 00:23:15.005 "ffdhe4096", 00:23:15.005 "ffdhe6144", 00:23:15.005 "ffdhe8192" 00:23:15.005 ], 00:23:15.005 "rdma_umr_per_io": false 00:23:15.005 } 00:23:15.005 }, 00:23:15.005 { 00:23:15.005 "method": "bdev_nvme_attach_controller", 00:23:15.005 "params": { 00:23:15.005 "name": "nvme0", 00:23:15.005 "trtype": "TCP", 00:23:15.005 "adrfam": "IPv4", 00:23:15.005 "traddr": "10.0.0.2", 00:23:15.005 "trsvcid": "4420", 00:23:15.005 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.005 "prchk_reftag": false, 00:23:15.005 "prchk_guard": false, 00:23:15.005 "ctrlr_loss_timeout_sec": 0, 00:23:15.005 "reconnect_delay_sec": 0, 00:23:15.005 "fast_io_fail_timeout_sec": 0, 00:23:15.005 "psk": "key0", 00:23:15.005 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.005 "hdgst": false, 00:23:15.005 "ddgst": false, 00:23:15.005 "multipath": "multipath" 00:23:15.005 } 00:23:15.005 
}, 00:23:15.005 { 00:23:15.005 "method": "bdev_nvme_set_hotplug", 00:23:15.005 "params": { 00:23:15.005 "period_us": 100000, 00:23:15.005 "enable": false 00:23:15.005 } 00:23:15.005 }, 00:23:15.005 { 00:23:15.005 "method": "bdev_enable_histogram", 00:23:15.005 "params": { 00:23:15.005 "name": "nvme0n1", 00:23:15.005 "enable": true 00:23:15.005 } 00:23:15.005 }, 00:23:15.005 { 00:23:15.005 "method": "bdev_wait_for_examine" 00:23:15.005 } 00:23:15.005 ] 00:23:15.005 }, 00:23:15.005 { 00:23:15.005 "subsystem": "nbd", 00:23:15.005 "config": [] 00:23:15.005 } 00:23:15.005 ] 00:23:15.005 }' 00:23:15.005 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.005 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.005 [2024-12-14 22:32:35.747146] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:15.005 [2024-12-14 22:32:35.747193] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358071 ] 00:23:15.005 [2024-12-14 22:32:35.821019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.005 [2024-12-14 22:32:35.842642] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.264 [2024-12-14 22:32:35.991170] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.830 22:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.830 22:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:15.830 22:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:23:15.830 22:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:16.089 22:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.089 22:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:16.089 Running I/O for 1 seconds... 00:23:17.283 4320.00 IOPS, 16.88 MiB/s 00:23:17.283 Latency(us) 00:23:17.283 [2024-12-14T21:32:38.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.283 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:17.283 Verification LBA range: start 0x0 length 0x2000 00:23:17.283 nvme0n1 : 1.02 4340.54 16.96 0.00 0.00 29218.96 5960.66 87381.33 00:23:17.283 [2024-12-14T21:32:38.167Z] =================================================================================================================== 00:23:17.283 [2024-12-14T21:32:38.167Z] Total : 4340.54 16.96 0.00 0.00 29218.96 5960.66 87381.33 00:23:17.283 { 00:23:17.283 "results": [ 00:23:17.283 { 00:23:17.283 "job": "nvme0n1", 00:23:17.283 "core_mask": "0x2", 00:23:17.283 "workload": "verify", 00:23:17.283 "status": "finished", 00:23:17.283 "verify_range": { 00:23:17.283 "start": 0, 00:23:17.283 "length": 8192 00:23:17.283 }, 00:23:17.283 "queue_depth": 128, 00:23:17.283 "io_size": 4096, 00:23:17.283 "runtime": 1.024758, 00:23:17.283 "iops": 4340.536985317509, 00:23:17.283 "mibps": 16.95522259889652, 00:23:17.283 "io_failed": 0, 00:23:17.283 "io_timeout": 0, 00:23:17.283 "avg_latency_us": 29218.957725248372, 00:23:17.283 "min_latency_us": 5960.655238095238, 00:23:17.283 "max_latency_us": 87381.33333333333 00:23:17.283 } 00:23:17.283 ], 00:23:17.283 "core_count": 1 00:23:17.283 } 00:23:17.283 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:17.283 
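The bdevperf summary above reports both IOPS and MiB/s; as a quick sanity check (not part of the test output), the throughput figure follows directly from the reported IOPS and the 4096-byte I/O size in the results JSON:

```python
# Figures copied from the "results" JSON in the trace above.
iops = 4340.536985317509   # "iops"
io_size = 4096             # "io_size", bytes per I/O

# Throughput in MiB/s = IOPS * bytes per I/O / 2^20.
mibps = iops * io_size / (1 << 20)
print(round(mibps, 2))  # 16.96, matching the reported "mibps"
```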
22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:17.283 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:17.283 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:17.283 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:17.283 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:17.283 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:17.283 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:17.283 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:17.283 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:17.283 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:17.283 nvmf_trace.0 00:23:17.283 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:17.283 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 358071 00:23:17.283 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 358071 ']' 00:23:17.283 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 358071 00:23:17.283 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.283 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.283 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 358071 00:23:17.283 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:17.283 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:17.283 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 358071' 00:23:17.283 killing process with pid 358071 00:23:17.283 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 358071 00:23:17.283 Received shutdown signal, test time was about 1.000000 seconds 00:23:17.283 00:23:17.283 Latency(us) 00:23:17.283 [2024-12-14T21:32:38.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.283 [2024-12-14T21:32:38.167Z] =================================================================================================================== 00:23:17.283 [2024-12-14T21:32:38.167Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:17.283 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 358071 00:23:17.542 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:17.542 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:17.542 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:17.543 rmmod nvme_tcp 00:23:17.543 rmmod nvme_fabrics 00:23:17.543 rmmod nvme_keyring 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 357833 ']' 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 357833 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357833 ']' 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 357833 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357833 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357833' 00:23:17.543 killing process with pid 357833 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357833 00:23:17.543 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357833 00:23:17.802 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:17.802 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:17.802 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:17.802 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:23:17.802 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:17.802 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:17.802 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:17.802 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:17.802 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:17.802 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.802 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.802 22:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.706 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:19.706 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ZaZq7rodYm /tmp/tmp.qn5yopYETN /tmp/tmp.rztsbDaX4n 00:23:19.965 00:23:19.965 real 1m18.962s 00:23:19.965 user 2m2.347s 00:23:19.965 sys 0m28.997s 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.965 ************************************ 00:23:19.965 END TEST nvmf_tls 00:23:19.965 ************************************ 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:19.965 ************************************ 00:23:19.965 START TEST nvmf_fips 00:23:19.965 ************************************ 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:19.965 * Looking for test storage... 00:23:19.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:19.965 
22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:19.965 22:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:19.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.965 --rc genhtml_branch_coverage=1 00:23:19.965 --rc genhtml_function_coverage=1 00:23:19.965 --rc genhtml_legend=1 00:23:19.965 --rc geninfo_all_blocks=1 00:23:19.965 --rc geninfo_unexecuted_blocks=1 00:23:19.965 00:23:19.965 ' 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:19.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.965 --rc genhtml_branch_coverage=1 00:23:19.965 --rc genhtml_function_coverage=1 00:23:19.965 --rc genhtml_legend=1 00:23:19.965 --rc geninfo_all_blocks=1 00:23:19.965 --rc geninfo_unexecuted_blocks=1 00:23:19.965 00:23:19.965 ' 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:19.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.965 --rc genhtml_branch_coverage=1 00:23:19.965 --rc genhtml_function_coverage=1 00:23:19.965 --rc genhtml_legend=1 00:23:19.965 --rc geninfo_all_blocks=1 00:23:19.965 --rc geninfo_unexecuted_blocks=1 00:23:19.965 00:23:19.965 ' 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:19.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.965 --rc genhtml_branch_coverage=1 00:23:19.965 --rc genhtml_function_coverage=1 00:23:19.965 --rc genhtml_legend=1 00:23:19.965 --rc geninfo_all_blocks=1 00:23:19.965 --rc geninfo_unexecuted_blocks=1 00:23:19.965 00:23:19.965 ' 00:23:19.965 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:23:19.966 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:19.966 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.966 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.966 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:20.224 22:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.224 22:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:20.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:20.224 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:20.225 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:20.225 Error setting digest 00:23:20.225 4002E1C7167F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:20.225 4002E1C7167F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:20.225 22:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.225 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.484 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:20.484 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:20.484 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:20.484 22:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:27.047 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.047 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:27.047 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:27.047 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:27.047 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:27.048 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:27.048 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:27.048 Found net devices under 0000:af:00.0: cvl_0_0 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:27.048 Found net devices under 0000:af:00.1: cvl_0_1 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.048 22:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.048 22:32:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:27.048 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:27.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:23:27.048 00:23:27.048 --- 10.0.0.2 ping statistics --- 00:23:27.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.048 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:23:27.048 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:27.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:23:27.048 00:23:27.048 --- 10.0.0.1 ping statistics --- 00:23:27.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.048 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:23:27.048 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.048 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:23:27.048 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:27.048 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.048 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:27.048 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:27.048 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.048 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:27.048 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:27.048 22:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:27.048 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:27.048 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:27.048 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=362016 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 362016 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 362016 ']' 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:27.049 [2024-12-14 22:32:47.136960] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:27.049 [2024-12-14 22:32:47.137028] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.049 [2024-12-14 22:32:47.214841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.049 [2024-12-14 22:32:47.235011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.049 [2024-12-14 22:32:47.235046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.049 [2024-12-14 22:32:47.235053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.049 [2024-12-14 22:32:47.235059] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.049 [2024-12-14 22:32:47.235064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:27.049 [2024-12-14 22:32:47.235567] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.esd 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.esd 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.esd 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.esd 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:27.049 [2024-12-14 22:32:47.546012] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.049 [2024-12-14 22:32:47.562019] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:27.049 [2024-12-14 22:32:47.562202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.049 malloc0 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=362049 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 362049 /var/tmp/bdevperf.sock 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 362049 ']' 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:27.049 [2024-12-14 22:32:47.688871] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:27.049 [2024-12-14 22:32:47.688923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362049 ] 00:23:27.049 [2024-12-14 22:32:47.762784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.049 [2024-12-14 22:32:47.785073] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:27.049 22:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.esd 00:23:27.307 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:27.565 [2024-12-14 22:32:48.256925] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.565 TLSTESTn1 00:23:27.565 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:27.565 Running I/O for 10 seconds... 
00:23:29.876 5238.00 IOPS, 20.46 MiB/s [2024-12-14T21:32:51.695Z] 5114.00 IOPS, 19.98 MiB/s [2024-12-14T21:32:52.631Z] 4860.67 IOPS, 18.99 MiB/s [2024-12-14T21:32:53.566Z] 4927.00 IOPS, 19.25 MiB/s [2024-12-14T21:32:54.501Z] 4871.40 IOPS, 19.03 MiB/s [2024-12-14T21:32:55.877Z] 4991.17 IOPS, 19.50 MiB/s [2024-12-14T21:32:56.813Z] 5068.14 IOPS, 19.80 MiB/s [2024-12-14T21:32:57.749Z] 5136.75 IOPS, 20.07 MiB/s [2024-12-14T21:32:58.685Z] 5186.00 IOPS, 20.26 MiB/s [2024-12-14T21:32:58.685Z] 5215.60 IOPS, 20.37 MiB/s 00:23:37.801 Latency(us) 00:23:37.801 [2024-12-14T21:32:58.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.801 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:37.801 Verification LBA range: start 0x0 length 0x2000 00:23:37.801 TLSTESTn1 : 10.01 5222.17 20.40 0.00 0.00 24476.41 5055.63 35951.18 00:23:37.801 [2024-12-14T21:32:58.685Z] =================================================================================================================== 00:23:37.801 [2024-12-14T21:32:58.685Z] Total : 5222.17 20.40 0.00 0.00 24476.41 5055.63 35951.18 00:23:37.801 { 00:23:37.801 "results": [ 00:23:37.801 { 00:23:37.801 "job": "TLSTESTn1", 00:23:37.802 "core_mask": "0x4", 00:23:37.802 "workload": "verify", 00:23:37.802 "status": "finished", 00:23:37.802 "verify_range": { 00:23:37.802 "start": 0, 00:23:37.802 "length": 8192 00:23:37.802 }, 00:23:37.802 "queue_depth": 128, 00:23:37.802 "io_size": 4096, 00:23:37.802 "runtime": 10.011733, 00:23:37.802 "iops": 5222.172824624868, 00:23:37.802 "mibps": 20.39911259619089, 00:23:37.802 "io_failed": 0, 00:23:37.802 "io_timeout": 0, 00:23:37.802 "avg_latency_us": 24476.409058029425, 00:23:37.802 "min_latency_us": 5055.634285714285, 00:23:37.802 "max_latency_us": 35951.177142857145 00:23:37.802 } 00:23:37.802 ], 00:23:37.802 "core_count": 1 00:23:37.802 } 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:37.802 
22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:37.802 nvmf_trace.0 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 362049 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 362049 ']' 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 362049 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362049 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 362049' 00:23:37.802 killing process with pid 362049 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 362049 00:23:37.802 Received shutdown signal, test time was about 10.000000 seconds 00:23:37.802 00:23:37.802 Latency(us) 00:23:37.802 [2024-12-14T21:32:58.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.802 [2024-12-14T21:32:58.686Z] =================================================================================================================== 00:23:37.802 [2024-12-14T21:32:58.686Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:37.802 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 362049 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:38.061 rmmod nvme_tcp 00:23:38.061 rmmod nvme_fabrics 00:23:38.061 rmmod nvme_keyring 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:38.061 22:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 362016 ']' 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 362016 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 362016 ']' 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 362016 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362016 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 362016' 00:23:38.061 killing process with pid 362016 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 362016 00:23:38.061 22:32:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 362016 00:23:38.321 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:38.321 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:38.321 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:38.321 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:23:38.321 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:23:38.321 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:38.321 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:23:38.321 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:38.321 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:38.321 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.321 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.321 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.esd 00:23:40.855 00:23:40.855 real 0m20.459s 00:23:40.855 user 0m21.562s 00:23:40.855 sys 0m9.200s 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:40.855 ************************************ 00:23:40.855 END TEST nvmf_fips 00:23:40.855 ************************************ 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:40.855 ************************************ 00:23:40.855 START TEST nvmf_control_msg_list 00:23:40.855 ************************************ 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:40.855 * Looking for test storage... 00:23:40.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:40.855 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:40.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.856 --rc genhtml_branch_coverage=1 00:23:40.856 --rc genhtml_function_coverage=1 00:23:40.856 --rc genhtml_legend=1 00:23:40.856 --rc geninfo_all_blocks=1 00:23:40.856 --rc geninfo_unexecuted_blocks=1 00:23:40.856 00:23:40.856 ' 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:40.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.856 --rc genhtml_branch_coverage=1 00:23:40.856 --rc genhtml_function_coverage=1 00:23:40.856 --rc genhtml_legend=1 00:23:40.856 --rc geninfo_all_blocks=1 00:23:40.856 --rc geninfo_unexecuted_blocks=1 00:23:40.856 00:23:40.856 ' 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:40.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.856 --rc genhtml_branch_coverage=1 00:23:40.856 --rc genhtml_function_coverage=1 00:23:40.856 --rc genhtml_legend=1 00:23:40.856 --rc geninfo_all_blocks=1 00:23:40.856 --rc geninfo_unexecuted_blocks=1 00:23:40.856 00:23:40.856 ' 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:40.856 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.856 --rc genhtml_branch_coverage=1 00:23:40.856 --rc genhtml_function_coverage=1 00:23:40.856 --rc genhtml_legend=1 00:23:40.856 --rc geninfo_all_blocks=1 00:23:40.856 --rc geninfo_unexecuted_blocks=1 00:23:40.856 00:23:40.856 ' 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:40.856 22:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.856 22:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:40.856 22:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:23:40.856 22:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:23:46.131 22:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.131 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:46.132 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:46.132 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:46.132 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:46.132 22:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:46.132 Found net devices under 0000:af:00.0: cvl_0_0 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.132 22:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:46.132 Found net devices under 0000:af:00.1: cvl_0_1 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:46.132 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.392 22:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:46.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:23:46.392 00:23:46.392 --- 10.0.0.2 ping statistics --- 00:23:46.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.392 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:46.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:23:46.392 00:23:46.392 --- 10.0.0.1 ping statistics --- 00:23:46.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.392 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:46.392 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:46.651 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:46.651 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:46.651 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:46.651 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:46.651 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=367767 00:23:46.651 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 367767 00:23:46.651 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:46.651 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 367767 ']' 00:23:46.651 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.651 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.651 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:46.651 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.651 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:46.651 [2024-12-14 22:33:07.344468] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:46.651 [2024-12-14 22:33:07.344515] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.651 [2024-12-14 22:33:07.421572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.651 [2024-12-14 22:33:07.443287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.651 [2024-12-14 22:33:07.443323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.651 [2024-12-14 22:33:07.443330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.651 [2024-12-14 22:33:07.443336] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.651 [2024-12-14 22:33:07.443341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:46.651 [2024-12-14 22:33:07.443842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.651 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.651 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:23:46.651 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:46.651 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:46.651 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:46.910 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.910 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:46.910 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:46.910 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:46.910 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.910 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:46.910 [2024-12-14 22:33:07.575409] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.910 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.910 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:46.910 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.910 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:46.910 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.910 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:46.910 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.910 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:46.910 Malloc0 00:23:46.910 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.910 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:46.910 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.911 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:46.911 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.911 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:46.911 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.911 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:46.911 [2024-12-14 22:33:07.615660] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.911 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.911 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=367856 00:23:46.911 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:46.911 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=367858 00:23:46.911 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:46.911 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=367860 00:23:46.911 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 367856 00:23:46.911 22:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:46.911 [2024-12-14 22:33:07.704328] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:23:46.911 [2024-12-14 22:33:07.704507] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:46.911 [2024-12-14 22:33:07.704659] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:48.287 Initializing NVMe Controllers 00:23:48.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:48.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:48.287 Initialization complete. Launching workers. 00:23:48.287 ======================================================== 00:23:48.287 Latency(us) 00:23:48.287 Device Information : IOPS MiB/s Average min max 00:23:48.287 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6581.00 25.71 151.61 128.82 347.23 00:23:48.287 ======================================================== 00:23:48.287 Total : 6581.00 25.71 151.61 128.82 347.23 00:23:48.287 00:23:48.287 Initializing NVMe Controllers 00:23:48.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:48.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:48.287 Initialization complete. Launching workers. 
00:23:48.287 ======================================================== 00:23:48.287 Latency(us) 00:23:48.287 Device Information : IOPS MiB/s Average min max 00:23:48.287 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6579.96 25.70 151.64 135.81 341.14 00:23:48.287 ======================================================== 00:23:48.287 Total : 6579.96 25.70 151.64 135.81 341.14 00:23:48.287 00:23:48.287 Initializing NVMe Controllers 00:23:48.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:48.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:48.287 Initialization complete. Launching workers. 00:23:48.287 ======================================================== 00:23:48.287 Latency(us) 00:23:48.287 Device Information : IOPS MiB/s Average min max 00:23:48.287 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40993.50 40822.65 41961.85 00:23:48.287 ======================================================== 00:23:48.287 Total : 25.00 0.10 40993.50 40822.65 41961.85 00:23:48.287 00:23:48.287 22:33:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 367858 00:23:48.287 22:33:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 367860 00:23:48.287 22:33:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:48.287 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:48.287 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:48.287 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:48.287 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:48.287 22:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:48.287 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:48.287 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:48.287 rmmod nvme_tcp 00:23:48.287 rmmod nvme_fabrics 00:23:48.287 rmmod nvme_keyring 00:23:48.287 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:48.287 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:48.287 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:48.287 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 367767 ']' 00:23:48.287 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 367767 00:23:48.288 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 367767 ']' 00:23:48.288 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 367767 00:23:48.288 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:23:48.288 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:48.288 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 367767 00:23:48.288 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:48.288 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:48.288 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 367767' 00:23:48.288 killing process with pid 367767 00:23:48.288 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 367767 00:23:48.288 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 367767 00:23:48.547 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:48.547 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:48.547 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:48.547 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:48.547 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:23:48.547 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:48.547 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:23:48.547 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:48.547 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:48.547 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.547 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.547 22:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.081 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:51.081 00:23:51.081 real 0m10.157s 00:23:51.081 user 0m6.760s 00:23:51.081 
sys 0m5.469s 00:23:51.081 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:51.081 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:51.081 ************************************ 00:23:51.081 END TEST nvmf_control_msg_list 00:23:51.081 ************************************ 00:23:51.081 22:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:51.081 22:33:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:51.081 22:33:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:51.081 22:33:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:51.081 ************************************ 00:23:51.081 START TEST nvmf_wait_for_buf 00:23:51.081 ************************************ 00:23:51.081 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:51.081 * Looking for test storage... 
00:23:51.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:51.081 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:51.081 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:51.081 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:51.081 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:51.081 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:51.081 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:51.081 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:51.081 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:51.081 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:51.081 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:23:51.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.082 --rc genhtml_branch_coverage=1 00:23:51.082 --rc genhtml_function_coverage=1 00:23:51.082 --rc genhtml_legend=1 00:23:51.082 --rc geninfo_all_blocks=1 00:23:51.082 --rc geninfo_unexecuted_blocks=1 00:23:51.082 00:23:51.082 ' 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:51.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.082 --rc genhtml_branch_coverage=1 00:23:51.082 --rc genhtml_function_coverage=1 00:23:51.082 --rc genhtml_legend=1 00:23:51.082 --rc geninfo_all_blocks=1 00:23:51.082 --rc geninfo_unexecuted_blocks=1 00:23:51.082 00:23:51.082 ' 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:51.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.082 --rc genhtml_branch_coverage=1 00:23:51.082 --rc genhtml_function_coverage=1 00:23:51.082 --rc genhtml_legend=1 00:23:51.082 --rc geninfo_all_blocks=1 00:23:51.082 --rc geninfo_unexecuted_blocks=1 00:23:51.082 00:23:51.082 ' 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:51.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.082 --rc genhtml_branch_coverage=1 00:23:51.082 --rc genhtml_function_coverage=1 00:23:51.082 --rc genhtml_legend=1 00:23:51.082 --rc geninfo_all_blocks=1 00:23:51.082 --rc geninfo_unexecuted_blocks=1 00:23:51.082 00:23:51.082 ' 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:51.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:51.082 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:56.356 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:56.356 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:56.356 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:56.357 Found net devices under 0000:af:00.0: cvl_0_0 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:56.357 22:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:56.357 Found net devices under 0000:af:00.1: cvl_0_1 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:56.357 22:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:56.357 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.616 22:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:56.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:56.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms
00:23:56.616
00:23:56.616 --- 10.0.0.2 ping statistics ---
00:23:56.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:56.616 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms
00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:56.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:56.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms
00:23:56.616
00:23:56.616 --- 10.0.0.1 ping statistics ---
00:23:56.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:56.616 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0
00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:56.616 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc
00:23:56.876 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:56.876 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:56.876 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:23:56.876 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=371556
00:23:56.876 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf --
nvmf/common.sh@510 -- # waitforlisten 371556
00:23:56.876 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:23:56.876 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 371556 ']'
00:23:56.876 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:56.876 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:56.876 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:56.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:56.876 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:56.876 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:23:56.876 [2024-12-14 22:33:17.554384] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:23:56.876 [2024-12-14 22:33:17.554431] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:56.876 [2024-12-14 22:33:17.632422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:56.876 [2024-12-14 22:33:17.653665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:56.876 [2024-12-14 22:33:17.653702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:56.876 [2024-12-14 22:33:17.653708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:56.876 [2024-12-14 22:33:17.653714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:56.876 [2024-12-14 22:33:17.653719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:56.876 [2024-12-14 22:33:17.654212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:23:56.876 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:56.876 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0
00:23:56.876 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:56.876 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:56.876 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:23:57.135
22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:23:57.135 Malloc0
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:57.135 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24
00:23:57.136 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:57.136 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:23:57.136 [2024-12-14 22:33:17.859393] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:57.136 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:57.136 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
00:23:57.136 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:57.136 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:23:57.136 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:57.136 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
00:23:57.136 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:57.136 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:23:57.136 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:57.136 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:23:57.136 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:57.136 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:23:57.136 [2024-12-14 22:33:17.887597] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:57.136 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:57.136 22:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:57.136 [2024-12-14 22:33:17.971978] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:23:59.039 Initializing NVMe Controllers
00:23:59.039 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:23:59.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:23:59.039 Initialization complete. Launching workers.
00:23:59.039 ========================================================
00:23:59.039                                             Latency(us)
00:23:59.039 Device Information                     :     IOPS    MiB/s    Average      min      max
00:23:59.039 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0:   128.55    16.07   32209.00  7302.36 63856.13
00:23:59.039 ========================================================
00:23:59.039 Total                                  :   128.55    16.07   32209.00  7302.36 63856.13
00:23:59.039
00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:59.039 22:33:19
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:59.039 rmmod nvme_tcp 00:23:59.039 rmmod nvme_fabrics 00:23:59.039 rmmod nvme_keyring 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 371556 ']' 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 371556 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 371556 ']' 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 371556 
00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 371556 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 371556' 00:23:59.039 killing process with pid 371556 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 371556 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 371556 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:59.039 22:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:59.039 22:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:01.575 22:33:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:01.575
00:24:01.575 real	0m10.455s
00:24:01.575 user	0m4.072s
00:24:01.575 sys	0m4.837s
00:24:01.575 22:33:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:01.575 22:33:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:24:01.575 ************************************
00:24:01.575 END TEST nvmf_wait_for_buf
00:24:01.575 ************************************
00:24:01.575 22:33:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']'
00:24:01.575 22:33:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp
00:24:01.575 22:33:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:01.575 22:33:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:01.575 22:33:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:01.575 ************************************
00:24:01.575 START TEST nvmf_fuzz
00:24:01.575 ************************************
00:24:01.575 22:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh
--transport=tcp 00:24:01.575 * Looking for test storage... 00:24:01.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:01.575 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:01.575 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:24:01.575 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:01.575 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:01.575 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:01.575 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:01.575 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:01.575 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:01.575 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:01.575 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:01.575 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:01.575 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:01.575 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:01.575 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:01.575 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:01.575 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:01.575 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:01.575 22:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:01.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.576 --rc genhtml_branch_coverage=1 00:24:01.576 --rc genhtml_function_coverage=1 
00:24:01.576 --rc genhtml_legend=1 00:24:01.576 --rc geninfo_all_blocks=1 00:24:01.576 --rc geninfo_unexecuted_blocks=1 00:24:01.576 00:24:01.576 ' 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:01.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.576 --rc genhtml_branch_coverage=1 00:24:01.576 --rc genhtml_function_coverage=1 00:24:01.576 --rc genhtml_legend=1 00:24:01.576 --rc geninfo_all_blocks=1 00:24:01.576 --rc geninfo_unexecuted_blocks=1 00:24:01.576 00:24:01.576 ' 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:01.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.576 --rc genhtml_branch_coverage=1 00:24:01.576 --rc genhtml_function_coverage=1 00:24:01.576 --rc genhtml_legend=1 00:24:01.576 --rc geninfo_all_blocks=1 00:24:01.576 --rc geninfo_unexecuted_blocks=1 00:24:01.576 00:24:01.576 ' 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:01.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.576 --rc genhtml_branch_coverage=1 00:24:01.576 --rc genhtml_function_coverage=1 00:24:01.576 --rc genhtml_legend=1 00:24:01.576 --rc geninfo_all_blocks=1 00:24:01.576 --rc geninfo_unexecuted_blocks=1 00:24:01.576 00:24:01.576 ' 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.576 
22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:01.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:01.576 22:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.146 22:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:24:08.146 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.146 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:08.147 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:08.147 Found net devices under 0000:af:00.0: cvl_0_0 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:08.147 Found net devices under 0000:af:00.1: cvl_0_1 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:08.147 22:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.147 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:08.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:24:08.147 00:24:08.147 --- 10.0.0.2 ping statistics --- 00:24:08.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.147 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
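The `nvmf_tcp_init` plumbing traced above follows a fixed pattern: create a private namespace, move the target-side port into it, address both ends, bring the links (and the namespace loopback) up, open the NVMe-oF port in iptables, then ping in both directions to confirm the path. A minimal dry-run sketch of that sequence, with the interface names, addresses, and port taken from this log; `run` only echoes here, so it is safe anywhere — on a real test host (as root, with both ports present) replace it with direct execution:

```shell
# Dry-run sketch of the target-namespace setup traced in the log above.
# 'run' only echoes; replace its body with "$@" to execute for real.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                             # namespace name used by the harness
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"            # target port moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                         # root ns -> target ns
run ip netns exec "$NS" ping -c 1 10.0.0.1     # target ns -> root ns
```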
00:24:08.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:24:08.147 00:24:08.147 --- 10.0.0.1 ping statistics --- 00:24:08.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.147 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=375433 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 375433 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 375433 ']' 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:08.147 Malloc0 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:08.147 22:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:40.221 Fuzzing completed. 
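Before that fuzz run, `fabrics_fuzz.sh` provisioned the target through four `rpc_cmd` calls (a wrapper around SPDK's `scripts/rpc.py`, executed against the `nvmf_tgt` started inside the namespace). The sequence and its flags, copied from the trace, look like this; `rpc` only echoes here — point it at `scripts/rpc.py` on a live target:

```shell
# Dry-run sketch of the RPC provisioning sequence the harness just ran.
# 'rpc' only echoes; on a live target use SPDK's scripts/rpc.py instead.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB I/O unit size
rpc bdev_malloc_create -b Malloc0 64 512       # 64 MiB RAM-backed bdev, 512 B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The resulting `trtype:tcp ... traddr:10.0.0.2 trsvcid:4420` TRID string is exactly what the trace then hands to `nvme_fuzz` with `-F`.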
Shutting down the fuzz application 00:24:40.221 00:24:40.221 Dumping successful admin opcodes: 00:24:40.221 9, 10, 00:24:40.221 Dumping successful io opcodes: 00:24:40.221 0, 9, 00:24:40.221 NS: 0x2000008eff00 I/O qp, Total commands completed: 924795, total successful commands: 5383, random_seed: 386999040 00:24:40.221 NS: 0x2000008eff00 admin qp, Total commands completed: 105904, total successful commands: 25, random_seed: 4187358080 00:24:40.221 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:40.221 Fuzzing completed. Shutting down the fuzz application 00:24:40.221 00:24:40.221 Dumping successful admin opcodes: 00:24:40.221 00:24:40.221 Dumping successful io opcodes: 00:24:40.221 00:24:40.221 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2466460502 00:24:40.221 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 2466525916 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:40.221 22:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:40.221 rmmod nvme_tcp 00:24:40.221 rmmod nvme_fabrics 00:24:40.221 rmmod nvme_keyring 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 375433 ']' 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 375433 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 375433 ']' 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 375433 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 375433 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 375433' 00:24:40.221 killing process with pid 375433 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 375433 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 375433 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.221 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.598 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:41.598 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:41.857 00:24:41.857 real 0m40.569s 00:24:41.857 user 0m52.676s 00:24:41.857 sys 0m16.900s 00:24:41.857 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:41.857 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.857 ************************************ 00:24:41.857 END TEST nvmf_fuzz 00:24:41.857 ************************************ 00:24:41.857 22:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:41.857 22:34:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:41.857 22:34:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.857 22:34:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:41.857 ************************************ 00:24:41.857 START TEST nvmf_multiconnection 00:24:41.857 ************************************ 00:24:41.858 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:41.858 * Looking for test storage... 
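The `nvmftestfini` path traced above unwinds the setup in roughly reverse order: unload the kernel initiator modules, kill the `nvmf_tgt` process, strip only the iptables rules the harness tagged (the `SPDK_NVMF` comment added by `ipts` is what makes the `iptables-save | grep -v | iptables-restore` round-trip selective), then remove the namespace and flush the leftover address. A dry-run sketch, with the pid and names taken from this run:

```shell
# Dry-run teardown mirroring nvmftestfini in the trace above.
# 'run' only echoes; replace its body with "$@" to execute for real.
run() { echo "+ $*"; }

run modprobe -v -r nvme-tcp
run modprobe -v -r nvme-fabrics
run kill 375433                                      # nvmf_tgt pid from this run
# Drop only rules carrying the SPDK_NVMF comment tag ('iptr' in the trace):
run 'iptables-save | grep -v SPDK_NVMF | iptables-restore'
run ip netns delete cvl_0_0_ns_spdk                  # _remove_spdk_ns
run ip -4 addr flush cvl_0_1
```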
00:24:41.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:41.858 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:41.858 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:24:41.858 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:24:42.117 22:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:24:42.117 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
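The `lt 1.15 2` call traced above is `cmp_versions` from `scripts/common.sh`: both version strings are split on `.-:` into arrays and compared component by component up to the longer length, with missing components treated as zero. A self-contained sketch of that comparison, simplified to numeric dot-separated components (`version_lt` is a hypothetical name; the real helper also implements `>`, `==`, and `>=`):

```shell
# Component-wise version comparison in the style of cmp_versions.
# Splits on '.', pads the shorter side with zeros,
# and returns 0 (true) when $1 < $2.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1    # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"    # matches the lt call in the log
```

Here `1.15 < 2` holds because the first components already decide it (1 < 2); the lexical comparison never sees the 15. That is why the trace then enables the branch/function-coverage lcov options for the older lcov.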
00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:42.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.118 --rc genhtml_branch_coverage=1 00:24:42.118 --rc genhtml_function_coverage=1 00:24:42.118 --rc genhtml_legend=1 00:24:42.118 --rc geninfo_all_blocks=1 00:24:42.118 --rc geninfo_unexecuted_blocks=1 00:24:42.118 00:24:42.118 ' 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:42.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.118 --rc genhtml_branch_coverage=1 00:24:42.118 --rc genhtml_function_coverage=1 00:24:42.118 --rc genhtml_legend=1 00:24:42.118 --rc geninfo_all_blocks=1 00:24:42.118 --rc geninfo_unexecuted_blocks=1 00:24:42.118 00:24:42.118 ' 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:42.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.118 --rc genhtml_branch_coverage=1 00:24:42.118 --rc genhtml_function_coverage=1 00:24:42.118 --rc genhtml_legend=1 00:24:42.118 --rc geninfo_all_blocks=1 00:24:42.118 --rc geninfo_unexecuted_blocks=1 00:24:42.118 00:24:42.118 ' 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:42.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.118 --rc genhtml_branch_coverage=1 00:24:42.118 --rc genhtml_function_coverage=1 00:24:42.118 --rc genhtml_legend=1 00:24:42.118 --rc geninfo_all_blocks=1 00:24:42.118 --rc geninfo_unexecuted_blocks=1 00:24:42.118 00:24:42.118 ' 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.118 22:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:42.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:24:42.118 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.390 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:24:47.390 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:24:47.390 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.391 22:34:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:47.391 22:34:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:47.391 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:47.391 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:47.391 Found net devices under 0000:af:00.0: cvl_0_0 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:47.391 Found net devices under 0000:af:00.1: cvl_0_1 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.391 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.651 22:34:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:47.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:47.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:24:47.651 00:24:47.651 --- 10.0.0.2 ping statistics --- 00:24:47.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.651 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:47.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:24:47.651 00:24:47.651 --- 10.0.0.1 ping statistics --- 00:24:47.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.651 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=384013 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 384013 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 384013 ']' 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.651 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.651 [2024-12-14 22:34:08.476520] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:24:47.651 [2024-12-14 22:34:08.476571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.910 [2024-12-14 22:34:08.553819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:47.910 [2024-12-14 22:34:08.578169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.910 [2024-12-14 22:34:08.578210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.910 [2024-12-14 22:34:08.578218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.910 [2024-12-14 22:34:08.578224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.910 [2024-12-14 22:34:08.578229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:47.910 [2024-12-14 22:34:08.579611] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.910 [2024-12-14 22:34:08.579718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.910 [2024-12-14 22:34:08.579823] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.910 [2024-12-14 22:34:08.579824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:24:47.910 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.910 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:24:47.910 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:47.910 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:47.910 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.910 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.910 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:47.910 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.910 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.910 [2024-12-14 22:34:08.724100] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.911 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.911 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:47.911 22:34:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:47.911 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:47.911 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.911 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.911 Malloc1 00:24:47.911 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.911 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:47.911 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.911 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.911 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.911 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:47.911 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.911 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.911 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.911 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.911 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.911 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.911 [2024-12-14 22:34:08.792256] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.170 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.170 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.170 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:48.170 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.170 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.170 Malloc2 00:24:48.170 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.170 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:48.170 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.170 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.170 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.170 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:48.170 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.170 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:24:48.170 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.170 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:48.170 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 Malloc3 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 Malloc4 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 
22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 Malloc5 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 Malloc6 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 Malloc7 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.171 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.431 Malloc8 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.431 22:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.431 Malloc9 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.431 Malloc10 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.431 Malloc11 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:48.431 
22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:48.431 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.432 22:34:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:24:49.807 22:34:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:49.807 22:34:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:49.807 22:34:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:49.807 22:34:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:49.807 22:34:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:51.709 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:51.709 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:51.709 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:24:51.709 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:51.709 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:51.709 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:51.709 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.709 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:53.086 22:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:53.086 22:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:53.086 22:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:53.086 22:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:53.086 22:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:54.988 22:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:54.988 22:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:54.988 22:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:24:54.988 22:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:54.988 22:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:54.988 22:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:54.988 22:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.988 22:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:55.924 22:34:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:55.924 22:34:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:55.924 22:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:55.924 22:34:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:55.924 22:34:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:58.456 22:34:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:58.456 22:34:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:58.456 22:34:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:24:58.456 22:34:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:58.456 22:34:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:58.456 22:34:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:58.456 22:34:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.456 22:34:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:59.392 22:34:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:59.392 22:34:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:59.392 22:34:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:59.392 
22:34:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:59.392 22:34:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:01.294 22:34:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:01.294 22:34:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:01.294 22:34:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:01.294 22:34:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:01.294 22:34:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:01.294 22:34:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:01.294 22:34:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.294 22:34:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:02.670 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:02.670 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:02.670 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:02.670 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:02.670 22:34:23 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:04.571 22:34:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:04.571 22:34:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:04.572 22:34:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:04.572 22:34:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:04.572 22:34:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:04.572 22:34:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:04.572 22:34:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.572 22:34:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:05.948 22:34:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:05.948 22:34:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:05.948 22:34:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:05.948 22:34:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:05.948 22:34:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:07.850 22:34:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:07.850 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:07.851 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:07.851 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:07.851 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:07.851 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:07.851 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.851 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:09.227 22:34:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:09.228 22:34:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:09.228 22:34:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:09.228 22:34:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:09.228 22:34:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:11.131 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:11.131 22:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:11.131 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:11.131 22:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:11.131 22:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:11.131 22:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:11.390 22:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.390 22:34:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:12.766 22:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:12.766 22:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:12.766 22:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:12.766 22:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:12.766 22:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:14.666 22:34:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:14.666 22:34:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:14.666 22:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:14.666 22:34:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:14.666 22:34:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:14.666 22:34:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:14.666 22:34:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:14.666 22:34:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:16.042 22:34:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:16.042 22:34:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:16.042 22:34:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:16.042 22:34:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:16.042 22:34:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:17.944 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:17.944 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:17.944 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:17.944 22:34:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:17.944 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:17.944 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:17.944 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.944 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:19.318 22:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:19.318 22:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:19.319 22:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:19.319 22:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:19.319 22:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:21.221 22:34:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:21.221 22:34:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:21.221 22:34:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:25:21.221 22:34:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:21.221 22:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:21.221 22:34:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:21.221 22:34:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.221 22:34:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:23.122 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:23.122 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:23.122 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:23.122 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:23.122 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:25.024 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:25.024 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:25.024 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:25:25.024 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:25.024 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:25.024 
22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:25.024 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:25.024 [global] 00:25:25.024 thread=1 00:25:25.024 invalidate=1 00:25:25.024 rw=read 00:25:25.024 time_based=1 00:25:25.024 runtime=10 00:25:25.024 ioengine=libaio 00:25:25.024 direct=1 00:25:25.024 bs=262144 00:25:25.024 iodepth=64 00:25:25.024 norandommap=1 00:25:25.024 numjobs=1 00:25:25.024 00:25:25.024 [job0] 00:25:25.024 filename=/dev/nvme0n1 00:25:25.024 [job1] 00:25:25.024 filename=/dev/nvme10n1 00:25:25.024 [job2] 00:25:25.024 filename=/dev/nvme1n1 00:25:25.024 [job3] 00:25:25.024 filename=/dev/nvme2n1 00:25:25.024 [job4] 00:25:25.024 filename=/dev/nvme3n1 00:25:25.024 [job5] 00:25:25.024 filename=/dev/nvme4n1 00:25:25.024 [job6] 00:25:25.024 filename=/dev/nvme5n1 00:25:25.024 [job7] 00:25:25.024 filename=/dev/nvme6n1 00:25:25.024 [job8] 00:25:25.024 filename=/dev/nvme7n1 00:25:25.024 [job9] 00:25:25.024 filename=/dev/nvme8n1 00:25:25.024 [job10] 00:25:25.024 filename=/dev/nvme9n1 00:25:25.024 Could not set queue depth (nvme0n1) 00:25:25.024 Could not set queue depth (nvme10n1) 00:25:25.024 Could not set queue depth (nvme1n1) 00:25:25.024 Could not set queue depth (nvme2n1) 00:25:25.024 Could not set queue depth (nvme3n1) 00:25:25.024 Could not set queue depth (nvme4n1) 00:25:25.024 Could not set queue depth (nvme5n1) 00:25:25.024 Could not set queue depth (nvme6n1) 00:25:25.024 Could not set queue depth (nvme7n1) 00:25:25.024 Could not set queue depth (nvme8n1) 00:25:25.024 Could not set queue depth (nvme9n1) 00:25:25.283 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.283 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:25:25.283 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.283 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.283 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.283 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.283 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.283 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.283 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.283 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.283 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.283 fio-3.35 00:25:25.283 Starting 11 threads 00:25:37.493 00:25:37.493 job0: (groupid=0, jobs=1): err= 0: pid=390319: Sat Dec 14 22:34:56 2024 00:25:37.493 read: IOPS=148, BW=37.2MiB/s (39.0MB/s)(375MiB/10070msec) 00:25:37.493 slat (usec): min=17, max=205119, avg=4176.81, stdev=19331.76 00:25:37.493 clat (usec): min=584, max=847065, avg=425572.30, stdev=229919.41 00:25:37.493 lat (usec): min=608, max=847122, avg=429749.11, stdev=232807.56 00:25:37.493 clat percentiles (msec): 00:25:37.493 | 1.00th=[ 3], 5.00th=[ 26], 10.00th=[ 72], 20.00th=[ 161], 00:25:37.493 | 30.00th=[ 292], 40.00th=[ 401], 50.00th=[ 460], 60.00th=[ 542], 00:25:37.493 | 70.00th=[ 592], 80.00th=[ 651], 90.00th=[ 701], 95.00th=[ 726], 00:25:37.493 | 99.00th=[ 810], 99.50th=[ 810], 99.90th=[ 818], 99.95th=[ 844], 00:25:37.493 | 99.99th=[ 844] 00:25:37.493 bw ( KiB/s): 
min=17408, max=68608, per=4.08%, avg=36710.40, stdev=14568.08, samples=20 00:25:37.493 iops : min= 68, max= 268, avg=143.40, stdev=56.91, samples=20 00:25:37.493 lat (usec) : 750=0.13%, 1000=0.53% 00:25:37.493 lat (msec) : 2=0.27%, 4=0.27%, 20=1.34%, 50=5.14%, 100=5.81% 00:25:37.493 lat (msec) : 250=13.82%, 500=29.17%, 750=40.99%, 1000=2.54% 00:25:37.493 cpu : usr=0.04%, sys=0.72%, ctx=382, majf=0, minf=4097 00:25:37.493 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:25:37.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.493 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.493 issued rwts: total=1498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.493 job1: (groupid=0, jobs=1): err= 0: pid=390320: Sat Dec 14 22:34:56 2024 00:25:37.493 read: IOPS=136, BW=34.2MiB/s (35.9MB/s)(346MiB/10105msec) 00:25:37.493 slat (usec): min=9, max=218696, avg=4944.98, stdev=20295.95 00:25:37.493 clat (msec): min=25, max=911, avg=462.00, stdev=199.04 00:25:37.493 lat (msec): min=25, max=911, avg=466.94, stdev=202.00 00:25:37.493 clat percentiles (msec): 00:25:37.494 | 1.00th=[ 41], 5.00th=[ 101], 10.00th=[ 205], 20.00th=[ 288], 00:25:37.494 | 30.00th=[ 342], 40.00th=[ 405], 50.00th=[ 460], 60.00th=[ 523], 00:25:37.494 | 70.00th=[ 600], 80.00th=[ 642], 90.00th=[ 709], 95.00th=[ 785], 00:25:37.494 | 99.00th=[ 852], 99.50th=[ 852], 99.90th=[ 885], 99.95th=[ 911], 00:25:37.494 | 99.99th=[ 911] 00:25:37.494 bw ( KiB/s): min=12288, max=94720, per=3.75%, avg=33795.00, stdev=17630.23, samples=20 00:25:37.494 iops : min= 48, max= 370, avg=132.00, stdev=68.87, samples=20 00:25:37.494 lat (msec) : 50=1.30%, 100=3.69%, 250=10.77%, 500=41.94%, 750=33.26% 00:25:37.494 lat (msec) : 1000=9.04% 00:25:37.494 cpu : usr=0.02%, sys=0.70%, ctx=297, majf=0, minf=4097 00:25:37.494 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, 
>=64=95.4% 00:25:37.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.494 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.494 issued rwts: total=1383,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.494 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.494 job2: (groupid=0, jobs=1): err= 0: pid=390321: Sat Dec 14 22:34:56 2024 00:25:37.494 read: IOPS=252, BW=63.1MiB/s (66.1MB/s)(637MiB/10106msec) 00:25:37.494 slat (usec): min=10, max=196604, avg=3250.69, stdev=14931.11 00:25:37.494 clat (usec): min=1393, max=904542, avg=250274.48, stdev=232648.11 00:25:37.494 lat (usec): min=1433, max=904588, avg=253525.18, stdev=236094.89 00:25:37.494 clat percentiles (msec): 00:25:37.494 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 9], 20.00th=[ 37], 00:25:37.494 | 30.00th=[ 103], 40.00th=[ 124], 50.00th=[ 142], 60.00th=[ 228], 00:25:37.494 | 70.00th=[ 363], 80.00th=[ 460], 90.00th=[ 659], 95.00th=[ 709], 00:25:37.494 | 99.00th=[ 776], 99.50th=[ 793], 99.90th=[ 852], 99.95th=[ 902], 00:25:37.494 | 99.99th=[ 902] 00:25:37.494 bw ( KiB/s): min=19968, max=195584, per=7.06%, avg=63616.00, stdev=56891.05, samples=20 00:25:37.494 iops : min= 78, max= 764, avg=248.50, stdev=222.23, samples=20 00:25:37.494 lat (msec) : 2=0.20%, 4=2.75%, 10=8.59%, 20=3.02%, 50=7.92% 00:25:37.494 lat (msec) : 100=6.75%, 250=32.88%, 500=19.69%, 750=15.57%, 1000=2.63% 00:25:37.494 cpu : usr=0.12%, sys=0.87%, ctx=789, majf=0, minf=4097 00:25:37.494 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:25:37.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.494 issued rwts: total=2549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.494 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.494 job3: (groupid=0, jobs=1): err= 0: pid=390322: Sat Dec 14 22:34:56 2024 00:25:37.494 read: 
IOPS=184, BW=46.2MiB/s (48.4MB/s)(464MiB/10043msec) 00:25:37.494 slat (usec): min=12, max=409913, avg=5220.98, stdev=21912.70 00:25:37.494 clat (msec): min=17, max=988, avg=340.88, stdev=231.55 00:25:37.494 lat (msec): min=17, max=988, avg=346.11, stdev=234.81 00:25:37.494 clat percentiles (msec): 00:25:37.494 | 1.00th=[ 46], 5.00th=[ 59], 10.00th=[ 73], 20.00th=[ 92], 00:25:37.494 | 30.00th=[ 138], 40.00th=[ 205], 50.00th=[ 338], 60.00th=[ 426], 00:25:37.494 | 70.00th=[ 481], 80.00th=[ 558], 90.00th=[ 651], 95.00th=[ 718], 00:25:37.494 | 99.00th=[ 927], 99.50th=[ 927], 99.90th=[ 978], 99.95th=[ 986], 00:25:37.494 | 99.99th=[ 986] 00:25:37.494 bw ( KiB/s): min= 5120, max=195072, per=5.09%, avg=45849.60, stdev=40459.59, samples=20 00:25:37.494 iops : min= 20, max= 762, avg=179.10, stdev=158.05, samples=20 00:25:37.494 lat (msec) : 20=0.22%, 50=1.73%, 100=19.46%, 250=22.96%, 500=28.09% 00:25:37.494 lat (msec) : 750=23.13%, 1000=4.42% 00:25:37.494 cpu : usr=0.07%, sys=0.77%, ctx=279, majf=0, minf=4097 00:25:37.494 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:25:37.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.494 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.494 issued rwts: total=1855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.494 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.494 job4: (groupid=0, jobs=1): err= 0: pid=390323: Sat Dec 14 22:34:56 2024 00:25:37.494 read: IOPS=452, BW=113MiB/s (119MB/s)(1142MiB/10086msec) 00:25:37.494 slat (usec): min=20, max=492867, avg=1718.97, stdev=11877.51 00:25:37.494 clat (usec): min=1342, max=1087.9k, avg=139474.46, stdev=180080.09 00:25:37.494 lat (usec): min=1483, max=1087.9k, avg=141193.42, stdev=182413.49 00:25:37.494 clat percentiles (msec): 00:25:37.494 | 1.00th=[ 9], 5.00th=[ 18], 10.00th=[ 27], 20.00th=[ 31], 00:25:37.494 | 30.00th=[ 40], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 77], 
00:25:37.494 | 70.00th=[ 108], 80.00th=[ 148], 90.00th=[ 456], 95.00th=[ 558], 00:25:37.494 | 99.00th=[ 726], 99.50th=[ 978], 99.90th=[ 1020], 99.95th=[ 1020], 00:25:37.494 | 99.99th=[ 1083] 00:25:37.494 bw ( KiB/s): min=11264, max=301568, per=12.80%, avg=115276.80, stdev=95205.52, samples=20 00:25:37.494 iops : min= 44, max= 1178, avg=450.30, stdev=371.90, samples=20 00:25:37.494 lat (msec) : 2=0.07%, 4=0.72%, 10=1.53%, 20=3.53%, 50=30.86% 00:25:37.494 lat (msec) : 100=31.58%, 250=13.62%, 500=10.01%, 750=7.16%, 1000=0.70% 00:25:37.494 lat (msec) : 2000=0.22% 00:25:37.494 cpu : usr=0.24%, sys=2.00%, ctx=1327, majf=0, minf=3722 00:25:37.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:37.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.494 issued rwts: total=4566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.494 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.494 job5: (groupid=0, jobs=1): err= 0: pid=390345: Sat Dec 14 22:34:56 2024 00:25:37.494 read: IOPS=488, BW=122MiB/s (128MB/s)(1233MiB/10102msec) 00:25:37.494 slat (usec): min=9, max=216202, avg=1834.45, stdev=8316.73 00:25:37.494 clat (msec): min=2, max=680, avg=129.09, stdev=114.19 00:25:37.494 lat (msec): min=2, max=680, avg=130.92, stdev=115.45 00:25:37.494 clat percentiles (msec): 00:25:37.494 | 1.00th=[ 21], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 36], 00:25:37.494 | 30.00th=[ 51], 40.00th=[ 79], 50.00th=[ 97], 60.00th=[ 129], 00:25:37.494 | 70.00th=[ 159], 80.00th=[ 190], 90.00th=[ 239], 95.00th=[ 397], 00:25:37.494 | 99.00th=[ 567], 99.50th=[ 584], 99.90th=[ 684], 99.95th=[ 684], 00:25:37.494 | 99.99th=[ 684] 00:25:37.494 bw ( KiB/s): min=26624, max=423424, per=13.84%, avg=124646.40, stdev=107279.45, samples=20 00:25:37.494 iops : min= 104, max= 1654, avg=486.90, stdev=419.06, samples=20 00:25:37.494 lat (msec) : 4=0.04%, 10=0.12%, 
20=0.77%, 50=28.87%, 100=21.70% 00:25:37.494 lat (msec) : 250=39.03%, 500=7.54%, 750=1.93% 00:25:37.494 cpu : usr=0.17%, sys=1.89%, ctx=736, majf=0, minf=4097 00:25:37.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:37.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.494 issued rwts: total=4932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.494 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.494 job6: (groupid=0, jobs=1): err= 0: pid=390356: Sat Dec 14 22:34:56 2024 00:25:37.494 read: IOPS=329, BW=82.5MiB/s (86.5MB/s)(834MiB/10109msec) 00:25:37.494 slat (usec): min=15, max=276224, avg=2744.00, stdev=11065.61 00:25:37.494 clat (usec): min=1135, max=744834, avg=191095.14, stdev=124743.48 00:25:37.494 lat (usec): min=1165, max=764808, avg=193839.14, stdev=126282.15 00:25:37.494 clat percentiles (msec): 00:25:37.494 | 1.00th=[ 13], 5.00th=[ 51], 10.00th=[ 75], 20.00th=[ 93], 00:25:37.494 | 30.00th=[ 120], 40.00th=[ 142], 50.00th=[ 159], 60.00th=[ 178], 00:25:37.494 | 70.00th=[ 213], 80.00th=[ 266], 90.00th=[ 384], 95.00th=[ 447], 00:25:37.494 | 99.00th=[ 634], 99.50th=[ 651], 99.90th=[ 743], 99.95th=[ 743], 00:25:37.494 | 99.99th=[ 743] 00:25:37.494 bw ( KiB/s): min=24064, max=177664, per=9.30%, avg=83712.00, stdev=43856.34, samples=20 00:25:37.494 iops : min= 94, max= 694, avg=327.00, stdev=171.31, samples=20 00:25:37.494 lat (msec) : 2=0.03%, 4=0.48%, 10=0.27%, 20=0.75%, 50=3.51% 00:25:37.494 lat (msec) : 100=18.18%, 250=54.02%, 500=20.25%, 750=2.52% 00:25:37.494 cpu : usr=0.07%, sys=1.46%, ctx=560, majf=0, minf=4097 00:25:37.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:25:37.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.494 issued 
rwts: total=3334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.494 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.494 job7: (groupid=0, jobs=1): err= 0: pid=390375: Sat Dec 14 22:34:56 2024 00:25:37.494 read: IOPS=530, BW=133MiB/s (139MB/s)(1332MiB/10038msec) 00:25:37.494 slat (usec): min=9, max=454604, avg=1849.81, stdev=12878.10 00:25:37.494 clat (msec): min=19, max=903, avg=118.58, stdev=179.45 00:25:37.494 lat (msec): min=19, max=908, avg=120.43, stdev=182.13 00:25:37.494 clat percentiles (msec): 00:25:37.494 | 1.00th=[ 23], 5.00th=[ 25], 10.00th=[ 27], 20.00th=[ 30], 00:25:37.494 | 30.00th=[ 32], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 42], 00:25:37.494 | 70.00th=[ 72], 80.00th=[ 118], 90.00th=[ 464], 95.00th=[ 592], 00:25:37.494 | 99.00th=[ 709], 99.50th=[ 844], 99.90th=[ 902], 99.95th=[ 902], 00:25:37.494 | 99.99th=[ 902] 00:25:37.494 bw ( KiB/s): min=18468, max=558592, per=14.96%, avg=134760.20, stdev=179719.00, samples=20 00:25:37.494 iops : min= 72, max= 2182, avg=526.40, stdev=702.03, samples=20 00:25:37.494 lat (msec) : 20=0.21%, 50=62.37%, 100=13.55%, 250=9.31%, 500=5.82% 00:25:37.494 lat (msec) : 750=7.92%, 1000=0.83% 00:25:37.494 cpu : usr=0.19%, sys=2.10%, ctx=704, majf=0, minf=4097 00:25:37.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:37.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.494 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.494 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.494 job8: (groupid=0, jobs=1): err= 0: pid=390427: Sat Dec 14 22:34:56 2024 00:25:37.494 read: IOPS=257, BW=64.5MiB/s (67.6MB/s)(649MiB/10067msec) 00:25:37.494 slat (usec): min=12, max=418095, avg=2573.62, stdev=14119.03 00:25:37.494 clat (usec): min=1850, max=820667, avg=245339.60, stdev=201274.65 00:25:37.494 lat (usec): min=1895, max=875542, 
avg=247913.22, stdev=203974.87 00:25:37.494 clat percentiles (msec): 00:25:37.494 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 19], 20.00th=[ 43], 00:25:37.494 | 30.00th=[ 89], 40.00th=[ 180], 50.00th=[ 213], 60.00th=[ 241], 00:25:37.494 | 70.00th=[ 321], 80.00th=[ 422], 90.00th=[ 558], 95.00th=[ 651], 00:25:37.494 | 99.00th=[ 768], 99.50th=[ 785], 99.90th=[ 818], 99.95th=[ 818], 00:25:37.494 | 99.99th=[ 818] 00:25:37.495 bw ( KiB/s): min=25088, max=263680, per=7.20%, avg=64819.20, stdev=53866.12, samples=20 00:25:37.495 iops : min= 98, max= 1030, avg=253.20, stdev=210.41, samples=20 00:25:37.495 lat (msec) : 2=0.08%, 4=2.31%, 10=3.78%, 20=4.82%, 50=10.82% 00:25:37.495 lat (msec) : 100=9.71%, 250=30.47%, 500=23.73%, 750=13.14%, 1000=1.16% 00:25:37.495 cpu : usr=0.13%, sys=0.88%, ctx=801, majf=0, minf=4098 00:25:37.495 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:25:37.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.495 issued rwts: total=2596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.495 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.495 job9: (groupid=0, jobs=1): err= 0: pid=390441: Sat Dec 14 22:34:56 2024 00:25:37.495 read: IOPS=427, BW=107MiB/s (112MB/s)(1075MiB/10046msec) 00:25:37.495 slat (usec): min=15, max=338961, avg=1811.29, stdev=13707.02 00:25:37.495 clat (msec): min=15, max=1136, avg=147.60, stdev=236.66 00:25:37.495 lat (msec): min=16, max=1308, avg=149.41, stdev=239.48 00:25:37.495 clat percentiles (msec): 00:25:37.495 | 1.00th=[ 26], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 30], 00:25:37.495 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 35], 00:25:37.495 | 70.00th=[ 62], 80.00th=[ 207], 90.00th=[ 567], 95.00th=[ 709], 00:25:37.495 | 99.00th=[ 1003], 99.50th=[ 1099], 99.90th=[ 1116], 99.95th=[ 1133], 00:25:37.495 | 99.99th=[ 1133] 00:25:37.495 bw ( KiB/s): min=13312, 
max=517632, per=12.04%, avg=108390.40, stdev=170749.55, samples=20 00:25:37.495 iops : min= 52, max= 2022, avg=423.40, stdev=666.99, samples=20 00:25:37.495 lat (msec) : 20=0.14%, 50=67.29%, 100=8.31%, 250=5.51%, 500=5.49% 00:25:37.495 lat (msec) : 750=9.38%, 1000=2.44%, 2000=1.44% 00:25:37.495 cpu : usr=0.20%, sys=1.68%, ctx=593, majf=0, minf=4097 00:25:37.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:25:37.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.495 issued rwts: total=4298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.495 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.495 job10: (groupid=0, jobs=1): err= 0: pid=390450: Sat Dec 14 22:34:56 2024 00:25:37.495 read: IOPS=318, BW=79.7MiB/s (83.5MB/s)(805MiB/10107msec) 00:25:37.495 slat (usec): min=15, max=188940, avg=2502.30, stdev=10436.72 00:25:37.495 clat (msec): min=4, max=704, avg=198.10, stdev=146.23 00:25:37.495 lat (msec): min=4, max=835, avg=200.60, stdev=147.36 00:25:37.495 clat percentiles (msec): 00:25:37.495 | 1.00th=[ 11], 5.00th=[ 18], 10.00th=[ 38], 20.00th=[ 85], 00:25:37.495 | 30.00th=[ 108], 40.00th=[ 146], 50.00th=[ 163], 60.00th=[ 188], 00:25:37.495 | 70.00th=[ 222], 80.00th=[ 288], 90.00th=[ 426], 95.00th=[ 510], 00:25:37.495 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 701], 99.95th=[ 709], 00:25:37.495 | 99.99th=[ 709] 00:25:37.495 bw ( KiB/s): min=24064, max=220160, per=8.98%, avg=80844.80, stdev=48323.09, samples=20 00:25:37.495 iops : min= 94, max= 860, avg=315.80, stdev=188.76, samples=20 00:25:37.495 lat (msec) : 10=0.99%, 20=4.66%, 50=5.84%, 100=13.91%, 250=48.71% 00:25:37.495 lat (msec) : 500=20.09%, 750=5.81% 00:25:37.495 cpu : usr=0.09%, sys=1.29%, ctx=527, majf=0, minf=4097 00:25:37.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:25:37.495 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.495 issued rwts: total=3221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.495 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.495 00:25:37.495 Run status group 0 (all jobs): 00:25:37.495 READ: bw=879MiB/s (922MB/s), 34.2MiB/s-133MiB/s (35.9MB/s-139MB/s), io=8890MiB (9322MB), run=10038-10109msec 00:25:37.495 00:25:37.495 Disk stats (read/write): 00:25:37.495 nvme0n1: ios=2813/0, merge=0/0, ticks=1229634/0, in_queue=1229634, util=94.81% 00:25:37.495 nvme10n1: ios=2616/0, merge=0/0, ticks=1221418/0, in_queue=1221418, util=95.24% 00:25:37.495 nvme1n1: ios=4928/0, merge=0/0, ticks=1210944/0, in_queue=1210944, util=95.87% 00:25:37.495 nvme2n1: ios=3528/0, merge=0/0, ticks=1235259/0, in_queue=1235259, util=96.26% 00:25:37.495 nvme3n1: ios=8875/0, merge=0/0, ticks=1238345/0, in_queue=1238345, util=96.48% 00:25:37.495 nvme4n1: ios=9720/0, merge=0/0, ticks=1219896/0, in_queue=1219896, util=97.31% 00:25:37.495 nvme5n1: ios=6522/0, merge=0/0, ticks=1224799/0, in_queue=1224799, util=97.69% 00:25:37.495 nvme6n1: ios=10441/0, merge=0/0, ticks=1226852/0, in_queue=1226852, util=97.99% 00:25:37.495 nvme7n1: ios=5023/0, merge=0/0, ticks=1228711/0, in_queue=1228711, util=98.91% 00:25:37.495 nvme8n1: ios=8423/0, merge=0/0, ticks=1223124/0, in_queue=1223124, util=99.13% 00:25:37.495 nvme9n1: ios=6289/0, merge=0/0, ticks=1220461/0, in_queue=1220461, util=99.25% 00:25:37.495 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:37.495 [global] 00:25:37.495 thread=1 00:25:37.495 invalidate=1 00:25:37.495 rw=randwrite 00:25:37.495 time_based=1 00:25:37.495 runtime=10 00:25:37.495 ioengine=libaio 00:25:37.495 direct=1 00:25:37.495 bs=262144 00:25:37.495 iodepth=64 00:25:37.495 
norandommap=1 00:25:37.495 numjobs=1 00:25:37.495 00:25:37.495 [job0] 00:25:37.495 filename=/dev/nvme0n1 00:25:37.495 [job1] 00:25:37.495 filename=/dev/nvme10n1 00:25:37.495 [job2] 00:25:37.495 filename=/dev/nvme1n1 00:25:37.495 [job3] 00:25:37.495 filename=/dev/nvme2n1 00:25:37.495 [job4] 00:25:37.495 filename=/dev/nvme3n1 00:25:37.495 [job5] 00:25:37.495 filename=/dev/nvme4n1 00:25:37.495 [job6] 00:25:37.495 filename=/dev/nvme5n1 00:25:37.495 [job7] 00:25:37.495 filename=/dev/nvme6n1 00:25:37.495 [job8] 00:25:37.495 filename=/dev/nvme7n1 00:25:37.495 [job9] 00:25:37.495 filename=/dev/nvme8n1 00:25:37.495 [job10] 00:25:37.495 filename=/dev/nvme9n1 00:25:37.495 Could not set queue depth (nvme0n1) 00:25:37.495 Could not set queue depth (nvme10n1) 00:25:37.495 Could not set queue depth (nvme1n1) 00:25:37.495 Could not set queue depth (nvme2n1) 00:25:37.495 Could not set queue depth (nvme3n1) 00:25:37.495 Could not set queue depth (nvme4n1) 00:25:37.495 Could not set queue depth (nvme5n1) 00:25:37.495 Could not set queue depth (nvme6n1) 00:25:37.495 Could not set queue depth (nvme7n1) 00:25:37.495 Could not set queue depth (nvme8n1) 00:25:37.495 Could not set queue depth (nvme9n1) 00:25:37.495 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.495 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.495 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.495 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.495 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.495 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.495 job6: (g=0): 
rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.495 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.495 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.495 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.495 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.495 fio-3.35 00:25:37.495 Starting 11 threads 00:25:47.472 00:25:47.472 job0: (groupid=0, jobs=1): err= 0: pid=391484: Sat Dec 14 22:35:07 2024 00:25:47.472 write: IOPS=509, BW=127MiB/s (134MB/s)(1288MiB/10107msec); 0 zone resets 00:25:47.472 slat (usec): min=19, max=124614, avg=1123.72, stdev=5075.22 00:25:47.472 clat (usec): min=757, max=665645, avg=124299.57, stdev=125167.41 00:25:47.472 lat (usec): min=794, max=665710, avg=125423.29, stdev=126237.38 00:25:47.472 clat percentiles (msec): 00:25:47.472 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 10], 20.00th=[ 19], 00:25:47.472 | 30.00th=[ 36], 40.00th=[ 46], 50.00th=[ 66], 60.00th=[ 113], 00:25:47.472 | 70.00th=[ 199], 80.00th=[ 245], 90.00th=[ 288], 95.00th=[ 342], 00:25:47.472 | 99.00th=[ 550], 99.50th=[ 567], 99.90th=[ 625], 99.95th=[ 642], 00:25:47.472 | 99.99th=[ 667] 00:25:47.472 bw ( KiB/s): min=28672, max=397824, per=11.98%, avg=130304.00, stdev=96091.38, samples=20 00:25:47.472 iops : min= 112, max= 1554, avg=509.00, stdev=375.36, samples=20 00:25:47.472 lat (usec) : 1000=0.12% 00:25:47.472 lat (msec) : 2=0.17%, 4=2.10%, 10=9.28%, 20=9.22%, 50=26.24% 00:25:47.472 lat (msec) : 100=11.10%, 250=23.50%, 500=16.67%, 750=1.61% 00:25:47.472 cpu : usr=1.01%, sys=1.62%, ctx=3331, majf=0, minf=1 00:25:47.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:47.472 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.472 issued rwts: total=0,5153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.472 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.472 job1: (groupid=0, jobs=1): err= 0: pid=391509: Sat Dec 14 22:35:07 2024 00:25:47.472 write: IOPS=371, BW=92.8MiB/s (97.3MB/s)(942MiB/10154msec); 0 zone resets 00:25:47.472 slat (usec): min=17, max=69249, avg=1467.76, stdev=5072.38 00:25:47.472 clat (usec): min=706, max=774477, avg=170880.90, stdev=130888.98 00:25:47.472 lat (usec): min=748, max=774541, avg=172348.66, stdev=131946.24 00:25:47.472 clat percentiles (usec): 00:25:47.472 | 1.00th=[ 1942], 5.00th=[ 7111], 10.00th=[ 17695], 20.00th=[ 32900], 00:25:47.472 | 30.00th=[ 50070], 40.00th=[102237], 50.00th=[196084], 60.00th=[223347], 00:25:47.472 | 70.00th=[242222], 80.00th=[274727], 90.00th=[333448], 95.00th=[362808], 00:25:47.472 | 99.00th=[534774], 99.50th=[591397], 99.90th=[750781], 99.95th=[759170], 00:25:47.472 | 99.99th=[775947] 00:25:47.472 bw ( KiB/s): min=50176, max=285184, per=8.72%, avg=94860.05, stdev=56148.35, samples=20 00:25:47.472 iops : min= 196, max= 1114, avg=370.50, stdev=219.31, samples=20 00:25:47.472 lat (usec) : 750=0.05%, 1000=0.11% 00:25:47.472 lat (msec) : 2=0.90%, 4=1.70%, 10=3.63%, 20=5.33%, 50=18.20% 00:25:47.472 lat (msec) : 100=9.68%, 250=33.99%, 500=24.91%, 750=1.43%, 1000=0.05% 00:25:47.472 cpu : usr=0.80%, sys=1.40%, ctx=2653, majf=0, minf=1 00:25:47.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:25:47.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.472 issued rwts: total=0,3769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.472 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.472 job2: (groupid=0, 
jobs=1): err= 0: pid=391525: Sat Dec 14 22:35:07 2024 00:25:47.472 write: IOPS=301, BW=75.4MiB/s (79.1MB/s)(764MiB/10129msec); 0 zone resets 00:25:47.472 slat (usec): min=20, max=290932, avg=2295.24, stdev=8913.88 00:25:47.472 clat (usec): min=877, max=769907, avg=209702.29, stdev=155609.48 00:25:47.472 lat (usec): min=1033, max=770514, avg=211997.54, stdev=157353.93 00:25:47.472 clat percentiles (msec): 00:25:47.472 | 1.00th=[ 3], 5.00th=[ 12], 10.00th=[ 18], 20.00th=[ 45], 00:25:47.472 | 30.00th=[ 104], 40.00th=[ 144], 50.00th=[ 186], 60.00th=[ 259], 00:25:47.472 | 70.00th=[ 309], 80.00th=[ 347], 90.00th=[ 401], 95.00th=[ 443], 00:25:47.472 | 99.00th=[ 676], 99.50th=[ 760], 99.90th=[ 768], 99.95th=[ 768], 00:25:47.472 | 99.99th=[ 768] 00:25:47.472 bw ( KiB/s): min=38912, max=187392, per=7.04%, avg=76595.20, stdev=39267.22, samples=20 00:25:47.472 iops : min= 152, max= 732, avg=299.20, stdev=153.39, samples=20 00:25:47.472 lat (usec) : 1000=0.07% 00:25:47.472 lat (msec) : 2=0.29%, 4=1.15%, 10=3.08%, 20=9.23%, 50=6.78% 00:25:47.472 lat (msec) : 100=8.41%, 250=29.49%, 500=37.91%, 750=3.04%, 1000=0.56% 00:25:47.472 cpu : usr=0.61%, sys=1.10%, ctx=1778, majf=0, minf=1 00:25:47.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:25:47.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.472 issued rwts: total=0,3055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.472 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.472 job3: (groupid=0, jobs=1): err= 0: pid=391535: Sat Dec 14 22:35:07 2024 00:25:47.472 write: IOPS=307, BW=76.8MiB/s (80.5MB/s)(772MiB/10058msec); 0 zone resets 00:25:47.472 slat (usec): min=22, max=78938, avg=2186.18, stdev=6622.27 00:25:47.472 clat (usec): min=634, max=801729, avg=206142.72, stdev=153049.14 00:25:47.472 lat (usec): min=671, max=811196, avg=208328.90, stdev=154997.08 00:25:47.473 
clat percentiles (usec): 00:25:47.473 | 1.00th=[ 1532], 5.00th=[ 3851], 10.00th=[ 16057], 20.00th=[ 55313], 00:25:47.473 | 30.00th=[104334], 40.00th=[160433], 50.00th=[210764], 60.00th=[229639], 00:25:47.473 | 70.00th=[270533], 80.00th=[320865], 90.00th=[362808], 95.00th=[492831], 00:25:47.473 | 99.00th=[742392], 99.50th=[767558], 99.90th=[792724], 99.95th=[792724], 00:25:47.473 | 99.99th=[801113] 00:25:47.473 bw ( KiB/s): min=28672, max=157184, per=7.12%, avg=77465.60, stdev=31647.84, samples=20 00:25:47.473 iops : min= 112, max= 614, avg=302.60, stdev=123.62, samples=20 00:25:47.473 lat (usec) : 750=0.13%, 1000=0.13% 00:25:47.473 lat (msec) : 2=1.26%, 4=3.63%, 10=3.20%, 20=2.88%, 50=7.48% 00:25:47.473 lat (msec) : 100=10.52%, 250=35.64%, 500=30.20%, 750=4.05%, 1000=0.87% 00:25:47.473 cpu : usr=0.77%, sys=0.90%, ctx=1826, majf=0, minf=1 00:25:47.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:25:47.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.473 issued rwts: total=0,3089,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.473 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.473 job4: (groupid=0, jobs=1): err= 0: pid=391541: Sat Dec 14 22:35:07 2024 00:25:47.473 write: IOPS=341, BW=85.4MiB/s (89.6MB/s)(865MiB/10128msec); 0 zone resets 00:25:47.473 slat (usec): min=20, max=380004, avg=2265.76, stdev=8531.03 00:25:47.473 clat (usec): min=805, max=753484, avg=184951.86, stdev=113590.27 00:25:47.473 lat (usec): min=1286, max=753556, avg=187217.62, stdev=114690.12 00:25:47.473 clat percentiles (msec): 00:25:47.473 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 66], 20.00th=[ 101], 00:25:47.473 | 30.00th=[ 120], 40.00th=[ 150], 50.00th=[ 178], 60.00th=[ 209], 00:25:47.473 | 70.00th=[ 224], 80.00th=[ 249], 90.00th=[ 300], 95.00th=[ 368], 00:25:47.473 | 99.00th=[ 659], 99.50th=[ 718], 99.90th=[ 743], 99.95th=[ 
751], 00:25:47.473 | 99.99th=[ 751] 00:25:47.473 bw ( KiB/s): min=35328, max=165376, per=8.00%, avg=86996.55, stdev=29190.63, samples=20 00:25:47.473 iops : min= 138, max= 646, avg=339.80, stdev=114.04, samples=20 00:25:47.473 lat (usec) : 1000=0.03% 00:25:47.473 lat (msec) : 2=0.12%, 4=0.81%, 10=4.57%, 20=1.01%, 50=1.94% 00:25:47.473 lat (msec) : 100=11.67%, 250=60.18%, 500=17.28%, 750=2.31%, 1000=0.09% 00:25:47.473 cpu : usr=0.97%, sys=1.07%, ctx=1634, majf=0, minf=1 00:25:47.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:25:47.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.473 issued rwts: total=0,3461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.473 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.473 job5: (groupid=0, jobs=1): err= 0: pid=391560: Sat Dec 14 22:35:07 2024 00:25:47.473 write: IOPS=406, BW=102MiB/s (107MB/s)(1025MiB/10077msec); 0 zone resets 00:25:47.473 slat (usec): min=20, max=173690, avg=1593.18, stdev=5627.95 00:25:47.473 clat (usec): min=995, max=694299, avg=155616.03, stdev=118312.20 00:25:47.473 lat (usec): min=1049, max=694357, avg=157209.21, stdev=119331.73 00:25:47.473 clat percentiles (msec): 00:25:47.473 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 15], 20.00th=[ 54], 00:25:47.473 | 30.00th=[ 80], 40.00th=[ 104], 50.00th=[ 134], 60.00th=[ 159], 00:25:47.473 | 70.00th=[ 203], 80.00th=[ 264], 90.00th=[ 321], 95.00th=[ 351], 00:25:47.473 | 99.00th=[ 558], 99.50th=[ 600], 99.90th=[ 659], 99.95th=[ 684], 00:25:47.473 | 99.99th=[ 693] 00:25:47.473 bw ( KiB/s): min=47104, max=243200, per=9.50%, avg=103372.80, stdev=52043.22, samples=20 00:25:47.473 iops : min= 184, max= 950, avg=403.80, stdev=203.29, samples=20 00:25:47.473 lat (usec) : 1000=0.02% 00:25:47.473 lat (msec) : 2=0.22%, 4=1.32%, 10=5.56%, 20=5.34%, 50=6.95% 00:25:47.473 lat (msec) : 100=19.70%, 250=38.87%, 
500=20.51%, 750=1.51% 00:25:47.473 cpu : usr=1.02%, sys=1.36%, ctx=2372, majf=0, minf=1 00:25:47.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:47.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.473 issued rwts: total=0,4101,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.473 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.473 job6: (groupid=0, jobs=1): err= 0: pid=391569: Sat Dec 14 22:35:07 2024 00:25:47.473 write: IOPS=315, BW=79.0MiB/s (82.8MB/s)(802MiB/10153msec); 0 zone resets 00:25:47.473 slat (usec): min=26, max=318161, avg=2267.21, stdev=8032.27 00:25:47.473 clat (msec): min=4, max=819, avg=200.09, stdev=119.55 00:25:47.473 lat (msec): min=4, max=819, avg=202.35, stdev=120.66 00:25:47.473 clat percentiles (msec): 00:25:47.473 | 1.00th=[ 14], 5.00th=[ 36], 10.00th=[ 66], 20.00th=[ 100], 00:25:47.473 | 30.00th=[ 140], 40.00th=[ 169], 50.00th=[ 201], 60.00th=[ 218], 00:25:47.473 | 70.00th=[ 232], 80.00th=[ 259], 90.00th=[ 347], 95.00th=[ 430], 00:25:47.473 | 99.00th=[ 592], 99.50th=[ 709], 99.90th=[ 802], 99.95th=[ 818], 00:25:47.473 | 99.99th=[ 818] 00:25:47.473 bw ( KiB/s): min=26624, max=181760, per=7.40%, avg=80518.65, stdev=33578.95, samples=20 00:25:47.473 iops : min= 104, max= 710, avg=314.50, stdev=131.18, samples=20 00:25:47.473 lat (msec) : 10=0.28%, 20=1.81%, 50=5.83%, 100=12.78%, 250=56.05% 00:25:47.473 lat (msec) : 500=20.36%, 750=2.56%, 1000=0.34% 00:25:47.473 cpu : usr=0.80%, sys=1.02%, ctx=1675, majf=0, minf=1 00:25:47.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:25:47.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.473 issued rwts: total=0,3208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.473 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:25:47.473 job7: (groupid=0, jobs=1): err= 0: pid=391577: Sat Dec 14 22:35:07 2024 00:25:47.473 write: IOPS=481, BW=120MiB/s (126MB/s)(1223MiB/10154msec); 0 zone resets 00:25:47.473 slat (usec): min=18, max=145107, avg=1348.16, stdev=4817.64 00:25:47.473 clat (usec): min=706, max=701991, avg=131369.51, stdev=120455.97 00:25:47.473 lat (usec): min=736, max=702043, avg=132717.67, stdev=121635.77 00:25:47.473 clat percentiles (usec): 00:25:47.473 | 1.00th=[ 1352], 5.00th=[ 5145], 10.00th=[ 15139], 20.00th=[ 44303], 00:25:47.473 | 30.00th=[ 58983], 40.00th=[ 73925], 50.00th=[ 85459], 60.00th=[116917], 00:25:47.473 | 70.00th=[160433], 80.00th=[196084], 90.00th=[304088], 95.00th=[404751], 00:25:47.473 | 99.00th=[541066], 99.50th=[599786], 99.90th=[658506], 99.95th=[675283], 00:25:47.473 | 99.99th=[700449] 00:25:47.473 bw ( KiB/s): min=35328, max=238080, per=11.36%, avg=123622.40, stdev=64921.85, samples=20 00:25:47.473 iops : min= 138, max= 930, avg=482.90, stdev=253.60, samples=20 00:25:47.473 lat (usec) : 750=0.16%, 1000=0.41% 00:25:47.473 lat (msec) : 2=1.17%, 4=2.49%, 10=3.25%, 20=4.42%, 50=12.16% 00:25:47.473 lat (msec) : 100=30.87%, 250=31.50%, 500=11.59%, 750=1.98% 00:25:47.473 cpu : usr=1.10%, sys=1.38%, ctx=2762, majf=0, minf=1 00:25:47.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:47.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.473 issued rwts: total=0,4892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.473 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.473 job8: (groupid=0, jobs=1): err= 0: pid=391599: Sat Dec 14 22:35:07 2024 00:25:47.473 write: IOPS=352, BW=88.2MiB/s (92.5MB/s)(894MiB/10139msec); 0 zone resets 00:25:47.473 slat (usec): min=25, max=152352, avg=1765.74, stdev=6581.94 00:25:47.473 clat (usec): min=835, max=765488, 
avg=179556.49, stdev=148010.24 00:25:47.473 lat (usec): min=875, max=772622, avg=181322.23, stdev=149700.07 00:25:47.473 clat percentiles (msec): 00:25:47.473 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 21], 20.00th=[ 32], 00:25:47.473 | 30.00th=[ 56], 40.00th=[ 108], 50.00th=[ 148], 60.00th=[ 197], 00:25:47.473 | 70.00th=[ 257], 80.00th=[ 326], 90.00th=[ 401], 95.00th=[ 430], 00:25:47.473 | 99.00th=[ 575], 99.50th=[ 592], 99.90th=[ 735], 99.95th=[ 751], 00:25:47.473 | 99.99th=[ 768] 00:25:47.473 bw ( KiB/s): min=36864, max=331776, per=8.27%, avg=89958.40, stdev=66597.94, samples=20 00:25:47.473 iops : min= 144, max= 1296, avg=351.40, stdev=260.15, samples=20 00:25:47.473 lat (usec) : 1000=0.11% 00:25:47.473 lat (msec) : 2=0.39%, 4=0.64%, 10=3.58%, 20=5.20%, 50=19.04% 00:25:47.473 lat (msec) : 100=7.55%, 250=32.71%, 500=27.84%, 750=2.88%, 1000=0.06% 00:25:47.473 cpu : usr=0.71%, sys=1.28%, ctx=2345, majf=0, minf=2 00:25:47.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:25:47.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.473 issued rwts: total=0,3577,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.473 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.473 job9: (groupid=0, jobs=1): err= 0: pid=391604: Sat Dec 14 22:35:07 2024 00:25:47.473 write: IOPS=436, BW=109MiB/s (114MB/s)(1105MiB/10130msec); 0 zone resets 00:25:47.473 slat (usec): min=29, max=178877, avg=2215.03, stdev=6590.66 00:25:47.473 clat (msec): min=27, max=759, avg=144.43, stdev=139.15 00:25:47.473 lat (msec): min=27, max=759, avg=146.65, stdev=141.07 00:25:47.473 clat percentiles (msec): 00:25:47.473 | 1.00th=[ 38], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 47], 00:25:47.473 | 30.00th=[ 59], 40.00th=[ 71], 50.00th=[ 77], 60.00th=[ 102], 00:25:47.473 | 70.00th=[ 155], 80.00th=[ 226], 90.00th=[ 351], 95.00th=[ 447], 00:25:47.473 | 99.00th=[ 
667], 99.50th=[ 709], 99.90th=[ 743], 99.95th=[ 760], 00:25:47.473 | 99.99th=[ 760] 00:25:47.473 bw ( KiB/s): min=22528, max=314880, per=10.25%, avg=111467.90, stdev=94774.32, samples=20 00:25:47.473 iops : min= 88, max= 1230, avg=435.40, stdev=370.23, samples=20 00:25:47.473 lat (msec) : 50=25.03%, 100=34.68%, 250=22.82%, 500=13.47%, 750=3.94% 00:25:47.473 lat (msec) : 1000=0.07% 00:25:47.473 cpu : usr=0.95%, sys=1.52%, ctx=1124, majf=0, minf=1 00:25:47.474 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:47.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.474 issued rwts: total=0,4418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.474 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.474 job10: (groupid=0, jobs=1): err= 0: pid=391610: Sat Dec 14 22:35:07 2024 00:25:47.474 write: IOPS=439, BW=110MiB/s (115MB/s)(1106MiB/10061msec); 0 zone resets 00:25:47.474 slat (usec): min=21, max=143088, avg=1567.27, stdev=5621.96 00:25:47.474 clat (usec): min=823, max=912066, avg=143935.59, stdev=147420.79 00:25:47.474 lat (usec): min=875, max=920545, avg=145502.86, stdev=149093.32 00:25:47.474 clat percentiles (msec): 00:25:47.474 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 9], 20.00th=[ 19], 00:25:47.474 | 30.00th=[ 34], 40.00th=[ 53], 50.00th=[ 61], 60.00th=[ 150], 00:25:47.474 | 70.00th=[ 232], 80.00th=[ 288], 90.00th=[ 347], 95.00th=[ 368], 00:25:47.474 | 99.00th=[ 684], 99.50th=[ 735], 99.90th=[ 894], 99.95th=[ 902], 00:25:47.474 | 99.99th=[ 911] 00:25:47.474 bw ( KiB/s): min=33280, max=325120, per=10.26%, avg=111641.60, stdev=78687.46, samples=20 00:25:47.474 iops : min= 130, max= 1270, avg=436.10, stdev=307.37, samples=20 00:25:47.474 lat (usec) : 1000=0.11% 00:25:47.474 lat (msec) : 2=0.72%, 4=2.76%, 10=7.69%, 20=10.13%, 50=17.18% 00:25:47.474 lat (msec) : 100=16.21%, 250=17.22%, 500=26.13%, 750=1.49%, 
1000=0.36% 00:25:47.474 cpu : usr=1.10%, sys=1.30%, ctx=2961, majf=0, minf=1 00:25:47.474 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:47.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.474 issued rwts: total=0,4424,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.474 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.474 00:25:47.474 Run status group 0 (all jobs): 00:25:47.474 WRITE: bw=1062MiB/s (1114MB/s), 75.4MiB/s-127MiB/s (79.1MB/s-134MB/s), io=10.5GiB (11.3GB), run=10058-10154msec 00:25:47.474 00:25:47.474 Disk stats (read/write): 00:25:47.474 nvme0n1: ios=48/10077, merge=0/0, ticks=4044/1213058, in_queue=1217102, util=99.90% 00:25:47.474 nvme10n1: ios=44/7373, merge=0/0, ticks=44/1220563, in_queue=1220607, util=97.55% 00:25:47.474 nvme1n1: ios=46/5877, merge=0/0, ticks=3996/1185072, in_queue=1189068, util=100.00% 00:25:47.474 nvme2n1: ios=46/5872, merge=0/0, ticks=291/1209992, in_queue=1210283, util=100.00% 00:25:47.474 nvme3n1: ios=20/6744, merge=0/0, ticks=316/1184994, in_queue=1185310, util=98.30% 00:25:47.474 nvme4n1: ios=0/8040, merge=0/0, ticks=0/1215797, in_queue=1215797, util=98.11% 00:25:47.474 nvme5n1: ios=43/6255, merge=0/0, ticks=4597/1179751, in_queue=1184348, util=100.00% 00:25:47.474 nvme6n1: ios=37/9620, merge=0/0, ticks=1449/1213545, in_queue=1214994, util=100.00% 00:25:47.474 nvme7n1: ios=45/6966, merge=0/0, ticks=73/1213403, in_queue=1213476, util=99.05% 00:25:47.474 nvme8n1: ios=38/8654, merge=0/0, ticks=2136/1169145, in_queue=1171281, util=100.00% 00:25:47.474 nvme9n1: ios=0/8632, merge=0/0, ticks=0/1209394, in_queue=1209394, util=99.05% 00:25:47.474 22:35:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:47.474 22:35:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 
00:25:47.474 22:35:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.474 22:35:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:47.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:47.474 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:47.474 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:47.474 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:47.474 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:25:47.474 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:47.474 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:25:47.474 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:47.474 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:47.474 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.474 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.474 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.474 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.474 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:47.733 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:47.733 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:47.733 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:47.733 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:25:47.733 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:47.733 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:47.733 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:25:47.994 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:47.994 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:47.994 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.994 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.994 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.994 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.994 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:48.253 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:48.253 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:48.253 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:48.253 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:48.253 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:25:48.253 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:25:48.253 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:48.253 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:48.253 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:48.253 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.253 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.253 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.253 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.253 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:48.512 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:48.512 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:48.512 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:48.512 22:35:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:48.512 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:25:48.512 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:48.512 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:25:48.512 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:48.512 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:48.512 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.512 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.512 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.512 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.512 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:48.772 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:48.772 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:48.772 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:48.772 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:48.772 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep 
-q -w SPDK5 00:25:48.772 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:25:48.772 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:48.772 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:48.772 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:48.772 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.772 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.772 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.772 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.772 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:49.030 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:49.030 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:49.030 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:49.030 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:49.030 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:25:49.030 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:49.030 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:25:49.030 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:49.030 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:49.030 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.030 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.288 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.288 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.288 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:49.547 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:49.547 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:49.547 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:49.547 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:49.547 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:25:49.547 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:49.547 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:25:49.547 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:49.547 22:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:49.547 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.547 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.547 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.547 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.547 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:49.806 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:49.806 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.806 22:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.806 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:50.065 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:50.065 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:50.065 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:50.065 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:50.065 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:25:50.065 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:50.065 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:25:50.065 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:50.065 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:50.065 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.065 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.065 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.065 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:25:50.065 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:50.065 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:50.065 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:50.065 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:50.065 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:50.065 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:25:50.065 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:50.066 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:25:50.066 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:50.066 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:50.066 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.066 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.066 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.066 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:50.066 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:50.066 22:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:50.066 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:50.066 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:25:50.066 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:50.066 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:25:50.066 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:50.066 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:50.066 rmmod nvme_tcp 00:25:50.325 rmmod nvme_fabrics 00:25:50.325 rmmod nvme_keyring 00:25:50.325 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:50.325 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:25:50.325 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:25:50.325 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 384013 ']' 00:25:50.325 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 384013 00:25:50.325 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 384013 ']' 00:25:50.325 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 384013 00:25:50.325 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:25:50.325 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:50.325 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 384013 00:25:50.325 22:35:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:50.325 22:35:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:50.325 22:35:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 384013' 00:25:50.325 killing process with pid 384013 00:25:50.325 22:35:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 384013 00:25:50.325 22:35:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 384013 00:25:50.584 22:35:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:50.584 22:35:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:50.584 22:35:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:50.584 22:35:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:25:50.584 22:35:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:25:50.584 22:35:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:50.584 22:35:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:25:50.584 22:35:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:50.584 22:35:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:50.584 22:35:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.584 22:35:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.584 22:35:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:53.120 00:25:53.120 real 1m10.895s 00:25:53.120 user 4m17.045s 00:25:53.120 sys 0m17.190s 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.120 ************************************ 00:25:53.120 END TEST nvmf_multiconnection 00:25:53.120 ************************************ 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:53.120 ************************************ 00:25:53.120 START TEST nvmf_initiator_timeout 00:25:53.120 ************************************ 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:53.120 * Looking for test storage... 
00:25:53.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:53.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.120 --rc genhtml_branch_coverage=1 00:25:53.120 --rc genhtml_function_coverage=1 00:25:53.120 --rc genhtml_legend=1 00:25:53.120 --rc geninfo_all_blocks=1 00:25:53.120 --rc geninfo_unexecuted_blocks=1 00:25:53.120 00:25:53.120 ' 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:53.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.120 --rc genhtml_branch_coverage=1 00:25:53.120 --rc genhtml_function_coverage=1 00:25:53.120 --rc genhtml_legend=1 00:25:53.120 --rc geninfo_all_blocks=1 00:25:53.120 --rc geninfo_unexecuted_blocks=1 00:25:53.120 00:25:53.120 ' 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:53.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.120 --rc genhtml_branch_coverage=1 00:25:53.120 --rc genhtml_function_coverage=1 00:25:53.120 --rc genhtml_legend=1 00:25:53.120 --rc geninfo_all_blocks=1 00:25:53.120 --rc geninfo_unexecuted_blocks=1 00:25:53.120 00:25:53.120 ' 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:53.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.120 --rc genhtml_branch_coverage=1 00:25:53.120 --rc genhtml_function_coverage=1 00:25:53.120 --rc genhtml_legend=1 00:25:53.120 --rc geninfo_all_blocks=1 00:25:53.120 --rc geninfo_unexecuted_blocks=1 00:25:53.120 00:25:53.120 ' 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.120 
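The `lt`/`cmp_versions` trace above performs a dotted-version comparison in pure bash: both strings are split on `.-:` into arrays and compared component by component. A minimal standalone sketch of that technique (the `version_lt` name and the treatment of missing components as 0 are illustrative, not the exact SPDK helper):

```shell
#!/usr/bin/env bash
# Compare two dotted version strings; succeeds (returns 0) iff $1 < $2.
# Mirrors the technique traced above: split on ".-:" and compare each
# numeric component in turn, padding the shorter version with zeros.
version_lt() {
  local IFS=.-:
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  local i
  for (( i = 0; i < len; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1  # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.1 2 || echo "2.1 >= 2"
```

This is why the trace ends with `return 0`: lcov 1.15 compares less than 2, so the branch-coverage options for the older lcov are selected.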
22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.120 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:53.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:25:53.121 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.693 22:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
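The `gather_supported_nvmf_pci_devs` trace above builds per-family arrays (`e810`, `x722`, `mlx`) keyed by PCI vendor:device ID and then selects the test NICs from them. A hedged sketch of that classify-by-PCI-ID pattern; the device-ID table is a subset copied from the trace, and the `classify_nic` helper is illustrative rather than SPDK's actual `pci_bus_cache` lookup:

```shell
#!/usr/bin/env bash
# Sort NIC PCI addresses into driver families by vendor:device ID,
# mirroring the e810/x722/mlx arrays built in the trace above.
declare -a e810 x722 mlx

classify_nic() {
  local addr=$1 id=$2   # e.g. 0000:af:00.0  8086:159b
  case $id in
    8086:1592|8086:159b) e810+=("$addr") ;;  # Intel E810 family
    8086:37d2)           x722+=("$addr") ;;  # Intel X722
    15b3:*)              mlx+=("$addr") ;;   # Mellanox (IDs elided here)
  esac
}

# The two devices found in the log above:
classify_nic 0000:af:00.0 8086:159b
classify_nic 0000:af:00.1 8086:159b
echo "e810: ${e810[*]:-none}  x722: ${x722[*]:-none}  mlx: ${mlx[*]:-none}"
```

With two 0x159b devices classified as e810, `pci_devs` is set to that array, which is why the subsequent `(( 2 == 0 ))` emptiness check passes.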
00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:59.693 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:59.693 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:59.693 22:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:59.693 Found net devices under 0000:af:00.0: cvl_0_0 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.693 22:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:59.693 Found net devices under 0000:af:00.1: cvl_0_1 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:59.693 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:59.694 22:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:59.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:25:59.694 00:25:59.694 --- 10.0.0.2 ping statistics --- 00:25:59.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.694 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:59.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:59.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:25:59.694 00:25:59.694 --- 10.0.0.1 ping statistics --- 00:25:59.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.694 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=396709 
00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 396709 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 396709 ']' 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.694 [2024-12-14 22:35:19.776370] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:25:59.694 [2024-12-14 22:35:19.776417] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.694 [2024-12-14 22:35:19.856090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:59.694 [2024-12-14 22:35:19.879431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:59.694 [2024-12-14 22:35:19.879468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.694 [2024-12-14 22:35:19.879475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.694 [2024-12-14 22:35:19.879481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.694 [2024-12-14 22:35:19.879486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:59.694 [2024-12-14 22:35:19.880769] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.694 [2024-12-14 22:35:19.880882] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:59.694 [2024-12-14 22:35:19.880989] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.694 [2024-12-14 22:35:19.880989] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:59.694 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:59.694 
22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.694 Malloc0 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.694 Delay0 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.694 [2024-12-14 22:35:20.068570] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.694 [2024-12-14 22:35:20.101840] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.694 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:00.632 22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:00.632 
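The `rpc_cmd` calls above drive the target over SPDK's JSON-RPC socket. A sketch of the requests behind the first two calls, as plain JSON-RPC objects — the method names come from the log, but the exact parameter keys are an assumption of this sketch, not verified against the SPDK RPC reference:

```python
# Sketch of the JSON-RPC requests behind the rpc_cmd calls in the log above.
# Method names mirror the log; parameter keys are assumed for illustration.
def rpc_request(method, params, req_id=1):
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# bdev_malloc_create 64 512 -b Malloc0: a 64 MiB bdev with 512-byte blocks.
malloc = rpc_request("bdev_malloc_create",
                     {"num_blocks": 64 * 1024 * 1024 // 512,
                      "block_size": 512,
                      "name": "Malloc0"})

# bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30:
# wrap Malloc0 in a delay bdev; the test later raises these latencies to
# 31000000 us to provoke initiator timeouts, then restores them to 30 us.
delay = rpc_request("bdev_delay_create",
                    {"base_bdev_name": "Malloc0", "name": "Delay0",
                     "avg_read_latency": 30, "p99_read_latency": 30,
                     "avg_write_latency": 30, "p99_write_latency": 30},
                    req_id=2)
```

The remaining calls (`nvmf_create_transport`, `nvmf_create_subsystem`, `nvmf_subsystem_add_ns`, `nvmf_subsystem_add_listener`) follow the same request shape.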
22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:00.632 22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:00.632 22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:00.632 22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:02.537 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:02.537 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:02.537 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:02.537 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:02.537 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:02.537 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:02.537 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=397273 00:26:02.537 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:02.537 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:02.537 [global] 00:26:02.537 thread=1 00:26:02.537 invalidate=1 00:26:02.537 rw=write 00:26:02.537 time_based=1 00:26:02.537 runtime=60 00:26:02.537 ioengine=libaio 00:26:02.537 direct=1 00:26:02.537 bs=4096 00:26:02.537 
iodepth=1 00:26:02.537 norandommap=0 00:26:02.537 numjobs=1 00:26:02.537 00:26:02.537 verify_dump=1 00:26:02.537 verify_backlog=512 00:26:02.537 verify_state_save=0 00:26:02.537 do_verify=1 00:26:02.537 verify=crc32c-intel 00:26:02.537 [job0] 00:26:02.537 filename=/dev/nvme0n1 00:26:02.537 Could not set queue depth (nvme0n1) 00:26:02.797 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:02.797 fio-3.35 00:26:02.797 Starting 1 thread 00:26:06.087 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:06.087 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.087 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.087 true 00:26:06.087 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.087 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:06.087 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.087 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.087 true 00:26:06.087 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.087 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:06.087 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.087 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:26:06.087 true 00:26:06.087 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.087 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:06.087 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.088 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.088 true 00:26:06.088 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.088 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:08.624 22:35:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:08.624 22:35:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.624 22:35:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.624 true 00:26:08.624 22:35:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.624 22:35:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:08.624 22:35:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.624 22:35:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.624 true 00:26:08.624 22:35:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.624 22:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:08.624 22:35:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.624 22:35:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.624 true 00:26:08.624 22:35:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.624 22:35:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:08.624 22:35:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.624 22:35:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.624 true 00:26:08.624 22:35:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.624 22:35:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:08.624 22:35:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 397273 00:27:04.868 00:27:04.868 job0: (groupid=0, jobs=1): err= 0: pid=397520: Sat Dec 14 22:36:23 2024 00:27:04.868 read: IOPS=8, BW=35.2KiB/s (36.0kB/s)(2112KiB/60015msec) 00:27:04.868 slat (usec): min=6, max=7052, avg=46.50, stdev=413.30 00:27:04.868 clat (usec): min=212, max=41712k, avg=113107.23, stdev=1813846.77 00:27:04.868 lat (usec): min=220, max=41712k, avg=113153.73, stdev=1813845.80 00:27:04.868 clat percentiles (usec): 00:27:04.868 | 1.00th=[ 227], 5.00th=[ 239], 10.00th=[ 258], 00:27:04.868 | 20.00th=[ 40633], 30.00th=[ 41157], 40.00th=[ 41157], 00:27:04.868 | 50.00th=[ 41157], 60.00th=[ 41157], 70.00th=[ 41157], 00:27:04.868 | 80.00th=[ 41157], 90.00th=[ 41157], 
95.00th=[ 41157], 00:27:04.868 | 99.00th=[ 41681], 99.50th=[ 42206], 99.90th=[17112761], 00:27:04.868 | 99.95th=[17112761], 99.99th=[17112761] 00:27:04.868 write: IOPS=17, BW=68.2KiB/s (69.9kB/s)(4096KiB/60015msec); 0 zone resets 00:27:04.868 slat (usec): min=9, max=29537, avg=40.57, stdev=922.68 00:27:04.868 clat (usec): min=151, max=374, avg=215.32, stdev=29.25 00:27:04.868 lat (usec): min=165, max=29910, avg=255.90, stdev=928.08 00:27:04.868 clat percentiles (usec): 00:27:04.868 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 184], 00:27:04.868 | 30.00th=[ 192], 40.00th=[ 212], 50.00th=[ 227], 60.00th=[ 235], 00:27:04.868 | 70.00th=[ 239], 80.00th=[ 241], 90.00th=[ 245], 95.00th=[ 247], 00:27:04.868 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 375], 99.95th=[ 375], 00:27:04.868 | 99.99th=[ 375] 00:27:04.868 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:27:04.868 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:27:04.868 lat (usec) : 250=66.62%, 500=4.96%, 750=0.06% 00:27:04.868 lat (msec) : 50=28.29%, >=2000=0.06% 00:27:04.868 cpu : usr=0.03%, sys=0.04%, ctx=1557, majf=0, minf=1 00:27:04.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:04.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.868 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.869 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:04.869 00:27:04.869 Run status group 0 (all jobs): 00:27:04.869 READ: bw=35.2KiB/s (36.0kB/s), 35.2KiB/s-35.2KiB/s (36.0kB/s-36.0kB/s), io=2112KiB (2163kB), run=60015-60015msec 00:27:04.869 WRITE: bw=68.2KiB/s (69.9kB/s), 68.2KiB/s-68.2KiB/s (69.9kB/s-69.9kB/s), io=4096KiB (4194kB), run=60015-60015msec 00:27:04.869 00:27:04.869 Disk stats (read/write): 00:27:04.869 nvme0n1: ios=577/1024, merge=0/0, ticks=18171/210, 
in_queue=18381, util=99.75% 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:04.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:04.869 nvmf hotplug test: fio successful as expected 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.869 22:36:23 
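The run-status summary above is internally consistent: 528 reads and 1024 writes of 4096 B each over a 60015 ms run reproduce the reported 35.2 KiB/s and 68.2 KiB/s figures.

```python
# Sanity check of the fio bandwidth figures in the run-status summary:
# issued rwts total=528,1024 at bs=4096 over a 60015 ms runtime.
runtime_s = 60015 / 1000
read_kib = 528 * 4096 / 1024      # 2112 KiB read in total
write_kib = 1024 * 4096 / 1024    # 4096 KiB written in total

read_bw = read_kib / runtime_s    # ~35.2 KiB/s, matching READ: bw=35.2KiB/s
write_bw = write_kib / runtime_s  # ~68.2 KiB/s, matching WRITE: bw=68.2KiB/s
print(f"read {read_bw:.1f} KiB/s, write {write_bw:.1f} KiB/s")
```

The tiny throughput is expected: `iodepth=1` against a delay bdev whose read latency was temporarily raised to ~31 s serializes I/O behind multi-second waits, which also explains the ~41 s clat percentiles in the read section.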
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:04.869 rmmod nvme_tcp 00:27:04.869 rmmod nvme_fabrics 00:27:04.869 rmmod nvme_keyring 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 396709 ']' 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 396709 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 396709 ']' 00:27:04.869 
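The `killprocess` teardown above relies on `kill -0`, which delivers no signal and only tests whether the PID is still signalable. A standalone sketch of that liveness check, with a background `sleep` standing in for the nvmf target process:

```shell
# Liveness check in the style of killprocess above: `kill -0` sends no
# signal, it only reports whether the process can be signaled at all.
sleep 30 &
pid=$!

kill -0 "$pid" 2>/dev/null && echo "pid $pid is alive"

kill "$pid"                    # actually terminate it
wait "$pid" 2>/dev/null || true  # reap the child; ignore the signal status
kill -0 "$pid" 2>/dev/null || echo "pid $pid is gone"
```

Checking `ps --no-headers -o comm=` first, as the log does, guards against PID reuse: the test only kills the PID if its command name still looks like the reactor it started.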
22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 396709 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396709 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396709' 00:27:04.869 killing process with pid 396709 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 396709 00:27:04.869 22:36:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 396709 00:27:04.869 22:36:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:04.869 22:36:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:04.869 22:36:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:04.869 22:36:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:04.869 22:36:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:04.869 22:36:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:04.869 22:36:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:27:04.869 22:36:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:04.869 22:36:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:04.869 22:36:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.869 22:36:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.869 22:36:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.437 22:36:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:05.437 00:27:05.437 real 1m12.576s 00:27:05.437 user 4m22.021s 00:27:05.437 sys 0m6.357s 00:27:05.437 22:36:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:05.437 22:36:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.437 ************************************ 00:27:05.437 END TEST nvmf_initiator_timeout 00:27:05.437 ************************************ 00:27:05.437 22:36:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:05.437 22:36:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:05.437 22:36:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:05.437 22:36:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:05.437 22:36:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 
-- # pci_devs=() 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:12.158 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
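The shell arrays above (`e810`, `x722`, `mlx`) classify NICs by PCI vendor/device ID before deciding which ports to use. The same classification can be sketched as a lookup table — the IDs below are a subset copied from the log, included for illustration only:

```python
# Sketch of the PCI device-ID classification done by the e810/x722/mlx
# arrays above: map (vendor, device) to a NIC family. IDs mirror a subset
# of those visible in the log (0x8086 = Intel, 0x15b3 = Mellanox).
NIC_FAMILIES = {
    (0x8086, 0x1592): "e810",
    (0x8086, 0x159b): "e810",
    (0x8086, 0x37d2): "x722",
    (0x15b3, 0x1017): "mlx",
    (0x15b3, 0x1019): "mlx",
}

def classify(vendor: int, device: int) -> str:
    return NIC_FAMILIES.get((vendor, device), "unknown")

# The two ports found in the log, 0000:af:00.0/1 (0x8086 - 0x159b),
# classify as e810, so the script takes the e810 branch and binds the
# cvl_0_0 / cvl_0_1 net devices.
print(classify(0x8086, 0x159b))  # -> e810
```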
00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:12.158 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:12.158 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:12.159 Found net devices under 0000:af:00.0: cvl_0_0 00:27:12.159 22:36:31 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:12.159 Found net devices under 0000:af:00.1: cvl_0_1 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:12.159 ************************************ 00:27:12.159 START 
TEST nvmf_perf_adq 00:27:12.159 ************************************ 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:12.159 * Looking for test storage... 00:27:12.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:12.159 22:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:12.159 22:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:12.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.159 --rc genhtml_branch_coverage=1 00:27:12.159 --rc genhtml_function_coverage=1 00:27:12.159 --rc genhtml_legend=1 00:27:12.159 --rc geninfo_all_blocks=1 00:27:12.159 --rc geninfo_unexecuted_blocks=1 00:27:12.159 00:27:12.159 ' 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:12.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.159 --rc genhtml_branch_coverage=1 00:27:12.159 --rc genhtml_function_coverage=1 00:27:12.159 --rc genhtml_legend=1 00:27:12.159 --rc geninfo_all_blocks=1 00:27:12.159 --rc geninfo_unexecuted_blocks=1 00:27:12.159 00:27:12.159 ' 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:12.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.159 --rc genhtml_branch_coverage=1 00:27:12.159 --rc genhtml_function_coverage=1 00:27:12.159 --rc genhtml_legend=1 00:27:12.159 --rc geninfo_all_blocks=1 00:27:12.159 --rc geninfo_unexecuted_blocks=1 00:27:12.159 00:27:12.159 ' 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:12.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.159 --rc genhtml_branch_coverage=1 00:27:12.159 --rc genhtml_function_coverage=1 00:27:12.159 --rc genhtml_legend=1 00:27:12.159 --rc geninfo_all_blocks=1 00:27:12.159 --rc geninfo_unexecuted_blocks=1 00:27:12.159 00:27:12.159 ' 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:12.159 
22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.159 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.160 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.160 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:12.160 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.160 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:12.160 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:12.160 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:12.160 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:12.160 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.160 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.160 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:12.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:12.160 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:12.160 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:12.160 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:12.160 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:12.160 22:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:12.160 22:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.583 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:17.583 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:17.583 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:17.583 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:17.583 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:17.583 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:17.583 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:17.583 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:17.583 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:17.583 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:17.583 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:17.583 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:17.583 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:17.583 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:17.583 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:17.583 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.583 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:17.584 22:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:17.584 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:17.584 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:17.584 Found net devices under 0000:af:00.0: cvl_0_0 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:17.584 Found net devices under 0000:af:00.1: cvl_0_1 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:17.584 22:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:17.856 22:36:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:22.045 22:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:27.314 22:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:27.314 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:27.315 22:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:27.315 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)' 00:27:27.315 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:27.315 Found net devices under 0000:af:00.0: cvl_0_0 
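The discovery loop traced above resolves each PCI address to its network interface by globbing sysfs (`/sys/bus/pci/devices/$pci/net/*`) and then stripping the path prefix with `"${pci_net_devs[@]##*/}"`. A minimal standalone sketch of that pattern is below; it uses a mock sysfs tree so it runs anywhere, and `pci_to_netdev` is a hypothetical helper name for illustration, not part of SPDK's `nvmf/common.sh`.

```shell
#!/usr/bin/env bash
# Sketch of the PCI-address -> netdev lookup seen in the trace above.
# Assumption: under sysfs, /sys/bus/pci/devices/<BDF>/net/ holds one
# directory per network interface bound to that PCI device.
pci_to_netdev() {
    local sysfs_root=$1 pci=$2
    # Glob the interface directories for this device...
    local pci_net_devs=("$sysfs_root/bus/pci/devices/$pci/net/"*)
    # ...then keep only the interface names (e.g. cvl_0_0), mirroring
    # the "${pci_net_devs[@]##*/}" expansion in the traced script.
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "${pci_net_devs[@]}"
}

# Demonstrate against a mock sysfs tree (no real hardware needed).
mock=$(mktemp -d)
mkdir -p "$mock/bus/pci/devices/0000:af:00.0/net/cvl_0_0"
pci_to_netdev "$mock" "0000:af:00.0"   # prints: cvl_0_0
rm -rf "$mock"
```

On a real system the first argument would simply be `/sys`; the trace shows two such lookups, one per port of the dual-port E810 NIC (0000:af:00.0 and 0000:af:00.1).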
00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:27.315 Found net devices under 0000:af:00.1: cvl_0_1 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:27.315 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:27.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:27.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.723 ms 00:27:27.316 00:27:27.316 --- 10.0.0.2 ping statistics --- 00:27:27.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.316 rtt min/avg/max/mdev = 0.723/0.723/0.723/0.000 ms 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:27.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:27.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:27:27.316 00:27:27.316 --- 10.0.0.1 ping statistics --- 00:27:27.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.316 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=415785 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 415785 00:27:27.316 
22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 415785 ']' 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:27.316 [2024-12-14 22:36:47.596810] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:27:27.316 [2024-12-14 22:36:47.596856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:27.316 [2024-12-14 22:36:47.676167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:27.316 [2024-12-14 22:36:47.699621] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:27.316 [2024-12-14 22:36:47.699659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:27.316 [2024-12-14 22:36:47.699671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:27.316 [2024-12-14 22:36:47.699678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:27.316 [2024-12-14 22:36:47.699684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:27.316 [2024-12-14 22:36:47.701087] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.316 [2024-12-14 22:36:47.701199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:27.316 [2024-12-14 22:36:47.701224] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.316 [2024-12-14 22:36:47.701225] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:27.316 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:27.317 [2024-12-14 22:36:47.922405] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.317 
22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:27.317 Malloc1 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:27.317 [2024-12-14 22:36:47.989581] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=415816 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:27.317 22:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:29.218 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:29.218 22:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.218 22:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:29.218 22:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.218 22:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:29.218 "tick_rate": 2100000000, 00:27:29.218 "poll_groups": [ 00:27:29.218 { 00:27:29.218 "name": "nvmf_tgt_poll_group_000", 00:27:29.218 "admin_qpairs": 1, 00:27:29.218 "io_qpairs": 1, 00:27:29.218 "current_admin_qpairs": 1, 00:27:29.218 "current_io_qpairs": 1, 00:27:29.218 "pending_bdev_io": 0, 00:27:29.218 "completed_nvme_io": 19208, 00:27:29.218 "transports": [ 00:27:29.218 { 00:27:29.218 "trtype": "TCP" 00:27:29.218 } 00:27:29.218 ] 00:27:29.218 }, 00:27:29.218 { 00:27:29.218 "name": "nvmf_tgt_poll_group_001", 00:27:29.218 "admin_qpairs": 0, 00:27:29.218 "io_qpairs": 1, 00:27:29.218 "current_admin_qpairs": 0, 00:27:29.218 "current_io_qpairs": 1, 00:27:29.218 "pending_bdev_io": 0, 00:27:29.218 "completed_nvme_io": 19305, 00:27:29.218 "transports": [ 
00:27:29.218 { 00:27:29.218 "trtype": "TCP" 00:27:29.218 } 00:27:29.218 ] 00:27:29.218 }, 00:27:29.218 { 00:27:29.218 "name": "nvmf_tgt_poll_group_002", 00:27:29.218 "admin_qpairs": 0, 00:27:29.218 "io_qpairs": 1, 00:27:29.218 "current_admin_qpairs": 0, 00:27:29.218 "current_io_qpairs": 1, 00:27:29.218 "pending_bdev_io": 0, 00:27:29.218 "completed_nvme_io": 19526, 00:27:29.218 "transports": [ 00:27:29.218 { 00:27:29.218 "trtype": "TCP" 00:27:29.218 } 00:27:29.218 ] 00:27:29.218 }, 00:27:29.218 { 00:27:29.218 "name": "nvmf_tgt_poll_group_003", 00:27:29.218 "admin_qpairs": 0, 00:27:29.218 "io_qpairs": 1, 00:27:29.218 "current_admin_qpairs": 0, 00:27:29.218 "current_io_qpairs": 1, 00:27:29.218 "pending_bdev_io": 0, 00:27:29.218 "completed_nvme_io": 19142, 00:27:29.218 "transports": [ 00:27:29.218 { 00:27:29.218 "trtype": "TCP" 00:27:29.218 } 00:27:29.218 ] 00:27:29.218 } 00:27:29.218 ] 00:27:29.218 }' 00:27:29.218 22:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:29.218 22:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:29.218 22:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:29.218 22:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:29.218 22:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 415816 00:27:39.197 Initializing NVMe Controllers 00:27:39.197 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:39.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:39.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:39.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:39.197 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:39.197 Initialization complete. Launching workers. 00:27:39.197 ======================================================== 00:27:39.197 Latency(us) 00:27:39.197 Device Information : IOPS MiB/s Average min max 00:27:39.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10637.70 41.55 6015.60 2385.57 10633.49 00:27:39.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10702.30 41.81 5979.79 1736.07 9947.13 00:27:39.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10852.10 42.39 5897.97 1997.66 10383.18 00:27:39.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10703.90 41.81 5980.22 1592.25 10671.08 00:27:39.197 ======================================================== 00:27:39.197 Total : 42895.99 167.56 5968.08 1592.25 10671.08 00:27:39.197 00:27:39.197 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:27:39.197 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:39.197 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:27:39.197 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:39.198 rmmod nvme_tcp 00:27:39.198 rmmod nvme_fabrics 00:27:39.198 rmmod nvme_keyring 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:27:39.198 22:36:58 
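The poll-group verification above (perf_adq.sh@85-87) pipes `nvmf_get_stats` through `jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l` and fails unless the count is 4. A minimal sketch of that check in Python, using trimmed stats values copied from this run (only the fields the check reads are kept):

```python
import json

# Trimmed nvmf_get_stats output with values from the run above; only the
# fields the check actually uses are kept.
stats = json.loads("""{
  "poll_groups": [
    {"name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1, "completed_nvme_io": 19208},
    {"name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1, "completed_nvme_io": 19305},
    {"name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1, "completed_nvme_io": 19526},
    {"name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1, "completed_nvme_io": 19142}
  ]
}""")

# The test passes only when every poll group (one per core in -m 0xF) is
# servicing exactly one IO qpair, i.e. the connections were spread evenly
# across all four cores.
count = sum(1 for g in stats["poll_groups"] if g["current_io_qpairs"] == 1)
print(count)  # 4 in this run
```

With the stats shown above, all four groups report `current_io_qpairs: 1`, so the script's `[[ 4 -ne 4 ]]` guard does not trip and the perf run is allowed to proceed.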
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 415785 ']' 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 415785 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 415785 ']' 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 415785 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 415785 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 415785' 00:27:39.198 killing process with pid 415785 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 415785 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 415785 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:39.198 22:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:39.198 22:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.766 22:37:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:39.766 22:37:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:27:39.766 22:37:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:39.766 22:37:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:41.145 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:43.677 22:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@476 -- # prepare_net_devs 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:48.952 22:37:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:48.952 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:48.952 22:37:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:48.952 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:27:48.952 Found net devices under 0000:af:00.0: cvl_0_0 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:48.952 Found net devices under 0000:af:00.1: cvl_0_1 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:48.952 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:48.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:48.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:27:48.953 00:27:48.953 --- 10.0.0.2 ping statistics --- 00:27:48.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.953 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:48.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:48.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:27:48.953 00:27:48.953 --- 10.0.0.1 ping statistics --- 00:27:48.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.953 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:48.953 net.core.busy_poll = 1 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:48.953 net.core.busy_read = 1 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:48.953 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:49.211 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:49.211 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:49.211 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:49.211 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:49.211 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.211 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=419688 00:27:49.211 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 419688 00:27:49.211 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:27:49.211 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 419688 ']' 00:27:49.211 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.211 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:49.211 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.211 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:49.211 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.211 [2024-12-14 22:37:10.050136] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:27:49.211 [2024-12-14 22:37:10.050189] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.469 [2024-12-14 22:37:10.131323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:49.469 [2024-12-14 22:37:10.154949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:49.469 [2024-12-14 22:37:10.154984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:49.469 [2024-12-14 22:37:10.154991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:49.469 [2024-12-14 22:37:10.154997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:49.469 [2024-12-14 22:37:10.155002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:49.469 [2024-12-14 22:37:10.156435] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.469 [2024-12-14 22:37:10.156482] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:49.469 [2024-12-14 22:37:10.156588] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.469 [2024-12-14 22:37:10.156589] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:49.469 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.469 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:49.469 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:49.469 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:49.469 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.469 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:49.469 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:27:49.469 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:49.469 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:49.469 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.469 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.469 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:49.470 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:49.470 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:49.470 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.470 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.470 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.470 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:49.470 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.470 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.728 [2024-12-14 22:37:10.380602] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.728 22:37:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.728 Malloc1 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.728 [2024-12-14 22:37:10.442993] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=419865 
00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:27:49.728 22:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:51.634 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:27:51.634 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.634 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.634 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.634 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:27:51.634 "tick_rate": 2100000000, 00:27:51.634 "poll_groups": [ 00:27:51.634 { 00:27:51.634 "name": "nvmf_tgt_poll_group_000", 00:27:51.634 "admin_qpairs": 1, 00:27:51.634 "io_qpairs": 2, 00:27:51.634 "current_admin_qpairs": 1, 00:27:51.634 "current_io_qpairs": 2, 00:27:51.634 "pending_bdev_io": 0, 00:27:51.634 "completed_nvme_io": 26944, 00:27:51.635 "transports": [ 00:27:51.635 { 00:27:51.635 "trtype": "TCP" 00:27:51.635 } 00:27:51.635 ] 00:27:51.635 }, 00:27:51.635 { 00:27:51.635 "name": "nvmf_tgt_poll_group_001", 00:27:51.635 "admin_qpairs": 0, 00:27:51.635 "io_qpairs": 2, 00:27:51.635 "current_admin_qpairs": 0, 00:27:51.635 "current_io_qpairs": 2, 00:27:51.635 "pending_bdev_io": 0, 00:27:51.635 "completed_nvme_io": 27811, 00:27:51.635 "transports": [ 00:27:51.635 { 00:27:51.635 "trtype": "TCP" 00:27:51.635 } 00:27:51.635 ] 00:27:51.635 }, 00:27:51.635 { 00:27:51.635 "name": "nvmf_tgt_poll_group_002", 00:27:51.635 "admin_qpairs": 0, 00:27:51.635 "io_qpairs": 0, 00:27:51.635 "current_admin_qpairs": 0, 
00:27:51.635 "current_io_qpairs": 0, 00:27:51.635 "pending_bdev_io": 0, 00:27:51.635 "completed_nvme_io": 0, 00:27:51.635 "transports": [ 00:27:51.635 { 00:27:51.635 "trtype": "TCP" 00:27:51.635 } 00:27:51.635 ] 00:27:51.635 }, 00:27:51.635 { 00:27:51.635 "name": "nvmf_tgt_poll_group_003", 00:27:51.635 "admin_qpairs": 0, 00:27:51.635 "io_qpairs": 0, 00:27:51.635 "current_admin_qpairs": 0, 00:27:51.635 "current_io_qpairs": 0, 00:27:51.635 "pending_bdev_io": 0, 00:27:51.635 "completed_nvme_io": 0, 00:27:51.635 "transports": [ 00:27:51.635 { 00:27:51.635 "trtype": "TCP" 00:27:51.635 } 00:27:51.635 ] 00:27:51.635 } 00:27:51.635 ] 00:27:51.635 }' 00:27:51.635 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:51.635 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:27:51.635 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:27:51.894 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:27:51.894 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 419865 00:28:00.018 Initializing NVMe Controllers 00:28:00.018 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:00.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:00.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:00.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:00.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:00.018 Initialization complete. Launching workers. 
00:28:00.018 ======================================================== 00:28:00.018 Latency(us) 00:28:00.018 Device Information : IOPS MiB/s Average min max 00:28:00.018 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7898.10 30.85 8103.31 1272.89 52964.10 00:28:00.018 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6872.70 26.85 9346.45 1565.26 53844.24 00:28:00.018 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6886.70 26.90 9305.19 1555.43 53555.63 00:28:00.018 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7928.80 30.97 8071.88 1470.02 53073.09 00:28:00.018 ======================================================== 00:28:00.018 Total : 29586.30 115.57 8663.42 1272.89 53844.24 00:28:00.018 00:28:00.018 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:00.018 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:00.018 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:00.018 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:00.018 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:00.018 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:00.018 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:00.018 rmmod nvme_tcp 00:28:00.018 rmmod nvme_fabrics 00:28:00.018 rmmod nvme_keyring 00:28:00.018 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:00.018 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:00.018 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:00.018 22:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 419688 ']' 00:28:00.018 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 419688 00:28:00.018 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 419688 ']' 00:28:00.018 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 419688 00:28:00.019 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:00.019 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:00.019 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 419688 00:28:00.019 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:00.019 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:00.019 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 419688' 00:28:00.019 killing process with pid 419688 00:28:00.019 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 419688 00:28:00.019 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 419688 00:28:00.277 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:00.278 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:00.278 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:00.278 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:00.278 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:00.278 22:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:00.278 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:00.278 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:00.278 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:00.278 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.278 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.278 22:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:03.568 00:28:03.568 real 0m52.188s 00:28:03.568 user 2m44.545s 00:28:03.568 sys 0m11.328s 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:03.568 ************************************ 00:28:03.568 END TEST nvmf_perf_adq 00:28:03.568 ************************************ 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:28:03.568 ************************************ 00:28:03.568 START TEST nvmf_shutdown 00:28:03.568 ************************************ 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:03.568 * Looking for test storage... 00:28:03.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:03.568 22:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:03.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.568 --rc genhtml_branch_coverage=1 00:28:03.568 --rc genhtml_function_coverage=1 00:28:03.568 --rc genhtml_legend=1 00:28:03.568 --rc geninfo_all_blocks=1 00:28:03.568 --rc geninfo_unexecuted_blocks=1 00:28:03.568 00:28:03.568 ' 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:03.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.568 --rc genhtml_branch_coverage=1 00:28:03.568 --rc genhtml_function_coverage=1 00:28:03.568 --rc genhtml_legend=1 00:28:03.568 --rc geninfo_all_blocks=1 00:28:03.568 --rc geninfo_unexecuted_blocks=1 00:28:03.568 00:28:03.568 ' 00:28:03.568 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:03.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.568 --rc genhtml_branch_coverage=1 00:28:03.568 --rc genhtml_function_coverage=1 00:28:03.568 --rc genhtml_legend=1 00:28:03.568 --rc geninfo_all_blocks=1 00:28:03.568 --rc geninfo_unexecuted_blocks=1 00:28:03.568 00:28:03.568 ' 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:03.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.569 --rc genhtml_branch_coverage=1 00:28:03.569 --rc genhtml_function_coverage=1 00:28:03.569 --rc genhtml_legend=1 00:28:03.569 --rc geninfo_all_blocks=1 00:28:03.569 --rc geninfo_unexecuted_blocks=1 00:28:03.569 00:28:03.569 ' 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:03.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:03.569 ************************************ 00:28:03.569 START TEST nvmf_shutdown_tc1 00:28:03.569 ************************************ 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:03.569 22:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:10.138 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:10.138 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:10.138 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:10.138 22:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.138 22:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:10.138 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.138 22:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:10.138 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:10.138 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:10.139 Found net devices under 0000:af:00.0: cvl_0_0 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:10.139 Found net devices under 0000:af:00.1: cvl_0_1 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:10.139 22:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:10.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:28:10.139 00:28:10.139 --- 10.0.0.2 ping statistics --- 00:28:10.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.139 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:10.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:28:10.139 00:28:10.139 --- 10.0.0.1 ping statistics --- 00:28:10.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.139 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=425202 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 425202 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 425202 ']' 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:10.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:10.139 [2024-12-14 22:37:30.355587] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:10.139 [2024-12-14 22:37:30.355630] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.139 [2024-12-14 22:37:30.431285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:10.139 [2024-12-14 22:37:30.453995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.139 [2024-12-14 22:37:30.454033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.139 [2024-12-14 22:37:30.454041] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.139 [2024-12-14 22:37:30.454048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.139 [2024-12-14 22:37:30.454053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
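The `[: : integer expression expected` message earlier in this log comes from a numeric test on an empty variable (`'[' '' -eq 1 ']'` at nvmf/common.sh line 33). A minimal sketch of the defensive form, using an illustrative helper name rather than the test suite's own:

```shell
# check_flag: numeric test that tolerates an empty or unset value.
# `[ "" -eq 1 ]` raises "integer expression expected"; defaulting the
# value with ${v:-0} keeps the test well-formed.
check_flag() {
  local v="$1"
  if [ "${v:-0}" -eq 1 ]; then
    echo enabled
  else
    echo disabled
  fi
}
```

The error in the log is harmless here because the `[` failure simply makes the condition false, but the `${v:-0}` guard avoids the noise entirely.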
00:28:10.139 [2024-12-14 22:37:30.455390] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:10.139 [2024-12-14 22:37:30.455479] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:10.139 [2024-12-14 22:37:30.455607] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.139 [2024-12-14 22:37:30.455609] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.139 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:10.140 [2024-12-14 22:37:30.586602] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.140 22:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.140 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:10.140 Malloc1 00:28:10.140 [2024-12-14 22:37:30.695492] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.140 Malloc2 00:28:10.140 Malloc3 00:28:10.140 Malloc4 00:28:10.140 Malloc5 00:28:10.140 Malloc6 00:28:10.140 Malloc7 00:28:10.140 Malloc8 00:28:10.140 Malloc9 
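The `NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)` lines earlier in the log build the target's command line incrementally as a bash array, which keeps each argument intact under quoting. A simplified sketch of that pattern (variable names here are illustrative, not the suite's):

```shell
# Assemble an argument vector incrementally, as nvmf/common.sh does with
# NVMF_APP. Quoted expansion "${app_args[@]}" preserves argument boundaries.
app_args=(nvmf_tgt)
shm_id=0
app_args+=(-i "$shm_id" -e 0xFFFF)   # instance id and trace mask
app_args+=(-m 0x1E)                  # core mask
# Expanding "${app_args[@]}" would invoke the binary; here we just print.
printf '%s\n' "${app_args[@]}"
```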
00:28:10.400 Malloc10 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=425262 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 425262 /var/tmp/bdevperf.sock 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 425262 ']' 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:10.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:10.400 { 00:28:10.400 "params": { 00:28:10.400 "name": "Nvme$subsystem", 00:28:10.400 "trtype": "$TEST_TRANSPORT", 00:28:10.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.400 "adrfam": "ipv4", 00:28:10.400 "trsvcid": "$NVMF_PORT", 00:28:10.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.400 "hdgst": ${hdgst:-false}, 00:28:10.400 "ddgst": ${ddgst:-false} 00:28:10.400 }, 00:28:10.400 "method": "bdev_nvme_attach_controller" 00:28:10.400 } 00:28:10.400 EOF 00:28:10.400 )") 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:10.400 { 00:28:10.400 "params": { 00:28:10.400 "name": "Nvme$subsystem", 00:28:10.400 "trtype": "$TEST_TRANSPORT", 00:28:10.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.400 "adrfam": "ipv4", 00:28:10.400 "trsvcid": "$NVMF_PORT", 00:28:10.400 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.400 "hdgst": ${hdgst:-false}, 00:28:10.400 "ddgst": ${ddgst:-false} 00:28:10.400 }, 00:28:10.400 "method": "bdev_nvme_attach_controller" 00:28:10.400 } 00:28:10.400 EOF 00:28:10.400 )") 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:10.400 { 00:28:10.400 "params": { 00:28:10.400 "name": "Nvme$subsystem", 00:28:10.400 "trtype": "$TEST_TRANSPORT", 00:28:10.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.400 "adrfam": "ipv4", 00:28:10.400 "trsvcid": "$NVMF_PORT", 00:28:10.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.400 "hdgst": ${hdgst:-false}, 00:28:10.400 "ddgst": ${ddgst:-false} 00:28:10.400 }, 00:28:10.400 "method": "bdev_nvme_attach_controller" 00:28:10.400 } 00:28:10.400 EOF 00:28:10.400 )") 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:10.400 { 00:28:10.400 "params": { 00:28:10.400 "name": "Nvme$subsystem", 00:28:10.400 "trtype": "$TEST_TRANSPORT", 00:28:10.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.400 "adrfam": "ipv4", 00:28:10.400 "trsvcid": "$NVMF_PORT", 00:28:10.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.400 "hdgst": 
${hdgst:-false}, 00:28:10.400 "ddgst": ${ddgst:-false} 00:28:10.400 }, 00:28:10.400 "method": "bdev_nvme_attach_controller" 00:28:10.400 } 00:28:10.400 EOF 00:28:10.400 )") 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:10.400 { 00:28:10.400 "params": { 00:28:10.400 "name": "Nvme$subsystem", 00:28:10.400 "trtype": "$TEST_TRANSPORT", 00:28:10.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.400 "adrfam": "ipv4", 00:28:10.400 "trsvcid": "$NVMF_PORT", 00:28:10.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.400 "hdgst": ${hdgst:-false}, 00:28:10.400 "ddgst": ${ddgst:-false} 00:28:10.400 }, 00:28:10.400 "method": "bdev_nvme_attach_controller" 00:28:10.400 } 00:28:10.400 EOF 00:28:10.400 )") 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:10.400 { 00:28:10.400 "params": { 00:28:10.400 "name": "Nvme$subsystem", 00:28:10.400 "trtype": "$TEST_TRANSPORT", 00:28:10.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.400 "adrfam": "ipv4", 00:28:10.400 "trsvcid": "$NVMF_PORT", 00:28:10.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.400 "hdgst": ${hdgst:-false}, 00:28:10.400 "ddgst": ${ddgst:-false} 00:28:10.400 }, 00:28:10.400 "method": "bdev_nvme_attach_controller" 
00:28:10.400 } 00:28:10.400 EOF 00:28:10.400 )") 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:10.400 { 00:28:10.400 "params": { 00:28:10.400 "name": "Nvme$subsystem", 00:28:10.400 "trtype": "$TEST_TRANSPORT", 00:28:10.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.400 "adrfam": "ipv4", 00:28:10.400 "trsvcid": "$NVMF_PORT", 00:28:10.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.400 "hdgst": ${hdgst:-false}, 00:28:10.400 "ddgst": ${ddgst:-false} 00:28:10.400 }, 00:28:10.400 "method": "bdev_nvme_attach_controller" 00:28:10.400 } 00:28:10.400 EOF 00:28:10.400 )") 00:28:10.400 [2024-12-14 22:37:31.166707] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:28:10.400 [2024-12-14 22:37:31.166754] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:10.400 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:10.400 { 00:28:10.400 "params": { 00:28:10.400 "name": "Nvme$subsystem", 00:28:10.400 "trtype": "$TEST_TRANSPORT", 00:28:10.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.400 "adrfam": "ipv4", 00:28:10.400 "trsvcid": "$NVMF_PORT", 00:28:10.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.401 "hdgst": ${hdgst:-false}, 00:28:10.401 "ddgst": ${ddgst:-false} 00:28:10.401 }, 00:28:10.401 "method": "bdev_nvme_attach_controller" 00:28:10.401 } 00:28:10.401 EOF 00:28:10.401 )") 00:28:10.401 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:10.401 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:10.401 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:10.401 { 00:28:10.401 "params": { 00:28:10.401 "name": "Nvme$subsystem", 00:28:10.401 "trtype": "$TEST_TRANSPORT", 00:28:10.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.401 "adrfam": "ipv4", 00:28:10.401 "trsvcid": "$NVMF_PORT", 00:28:10.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.401 "hdgst": ${hdgst:-false}, 
00:28:10.401 "ddgst": ${ddgst:-false} 00:28:10.401 }, 00:28:10.401 "method": "bdev_nvme_attach_controller" 00:28:10.401 } 00:28:10.401 EOF 00:28:10.401 )") 00:28:10.401 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:10.401 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:10.401 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:10.401 { 00:28:10.401 "params": { 00:28:10.401 "name": "Nvme$subsystem", 00:28:10.401 "trtype": "$TEST_TRANSPORT", 00:28:10.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.401 "adrfam": "ipv4", 00:28:10.401 "trsvcid": "$NVMF_PORT", 00:28:10.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.401 "hdgst": ${hdgst:-false}, 00:28:10.401 "ddgst": ${ddgst:-false} 00:28:10.401 }, 00:28:10.401 "method": "bdev_nvme_attach_controller" 00:28:10.401 } 00:28:10.401 EOF 00:28:10.401 )") 00:28:10.401 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:10.401 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:10.401 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:10.401 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:10.401 "params": { 00:28:10.401 "name": "Nvme1", 00:28:10.401 "trtype": "tcp", 00:28:10.401 "traddr": "10.0.0.2", 00:28:10.401 "adrfam": "ipv4", 00:28:10.401 "trsvcid": "4420", 00:28:10.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:10.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:10.401 "hdgst": false, 00:28:10.401 "ddgst": false 00:28:10.401 }, 00:28:10.401 "method": "bdev_nvme_attach_controller" 00:28:10.401 },{ 00:28:10.401 "params": { 00:28:10.401 "name": "Nvme2", 00:28:10.401 "trtype": "tcp", 00:28:10.401 "traddr": "10.0.0.2", 00:28:10.401 "adrfam": "ipv4", 00:28:10.401 "trsvcid": "4420", 00:28:10.401 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:10.401 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:10.401 "hdgst": false, 00:28:10.401 "ddgst": false 00:28:10.401 }, 00:28:10.401 "method": "bdev_nvme_attach_controller" 00:28:10.401 },{ 00:28:10.401 "params": { 00:28:10.401 "name": "Nvme3", 00:28:10.401 "trtype": "tcp", 00:28:10.401 "traddr": "10.0.0.2", 00:28:10.401 "adrfam": "ipv4", 00:28:10.401 "trsvcid": "4420", 00:28:10.401 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:10.401 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:10.401 "hdgst": false, 00:28:10.401 "ddgst": false 00:28:10.401 }, 00:28:10.401 "method": "bdev_nvme_attach_controller" 00:28:10.401 },{ 00:28:10.401 "params": { 00:28:10.401 "name": "Nvme4", 00:28:10.401 "trtype": "tcp", 00:28:10.401 "traddr": "10.0.0.2", 00:28:10.401 "adrfam": "ipv4", 00:28:10.401 "trsvcid": "4420", 00:28:10.401 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:10.401 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:10.401 "hdgst": false, 00:28:10.401 "ddgst": false 00:28:10.401 }, 00:28:10.401 "method": "bdev_nvme_attach_controller" 00:28:10.401 },{ 00:28:10.401 "params": { 
00:28:10.401 "name": "Nvme5", 00:28:10.401 "trtype": "tcp", 00:28:10.401 "traddr": "10.0.0.2", 00:28:10.401 "adrfam": "ipv4", 00:28:10.401 "trsvcid": "4420", 00:28:10.401 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:10.401 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:10.401 "hdgst": false, 00:28:10.401 "ddgst": false 00:28:10.401 }, 00:28:10.401 "method": "bdev_nvme_attach_controller" 00:28:10.401 },{ 00:28:10.401 "params": { 00:28:10.401 "name": "Nvme6", 00:28:10.401 "trtype": "tcp", 00:28:10.401 "traddr": "10.0.0.2", 00:28:10.401 "adrfam": "ipv4", 00:28:10.401 "trsvcid": "4420", 00:28:10.401 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:10.401 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:10.401 "hdgst": false, 00:28:10.401 "ddgst": false 00:28:10.401 }, 00:28:10.401 "method": "bdev_nvme_attach_controller" 00:28:10.401 },{ 00:28:10.401 "params": { 00:28:10.401 "name": "Nvme7", 00:28:10.401 "trtype": "tcp", 00:28:10.401 "traddr": "10.0.0.2", 00:28:10.401 "adrfam": "ipv4", 00:28:10.401 "trsvcid": "4420", 00:28:10.401 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:10.401 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:10.401 "hdgst": false, 00:28:10.401 "ddgst": false 00:28:10.401 }, 00:28:10.401 "method": "bdev_nvme_attach_controller" 00:28:10.401 },{ 00:28:10.401 "params": { 00:28:10.401 "name": "Nvme8", 00:28:10.401 "trtype": "tcp", 00:28:10.401 "traddr": "10.0.0.2", 00:28:10.401 "adrfam": "ipv4", 00:28:10.401 "trsvcid": "4420", 00:28:10.401 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:10.401 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:10.401 "hdgst": false, 00:28:10.401 "ddgst": false 00:28:10.401 }, 00:28:10.401 "method": "bdev_nvme_attach_controller" 00:28:10.401 },{ 00:28:10.401 "params": { 00:28:10.401 "name": "Nvme9", 00:28:10.401 "trtype": "tcp", 00:28:10.401 "traddr": "10.0.0.2", 00:28:10.401 "adrfam": "ipv4", 00:28:10.401 "trsvcid": "4420", 00:28:10.401 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:10.401 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:10.401 "hdgst": false, 00:28:10.401 "ddgst": false 00:28:10.401 }, 00:28:10.401 "method": "bdev_nvme_attach_controller" 00:28:10.401 },{ 00:28:10.401 "params": { 00:28:10.401 "name": "Nvme10", 00:28:10.401 "trtype": "tcp", 00:28:10.401 "traddr": "10.0.0.2", 00:28:10.401 "adrfam": "ipv4", 00:28:10.401 "trsvcid": "4420", 00:28:10.401 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:10.401 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:10.401 "hdgst": false, 00:28:10.401 "ddgst": false 00:28:10.401 }, 00:28:10.401 "method": "bdev_nvme_attach_controller" 00:28:10.401 }' 00:28:10.401 [2024-12-14 22:37:31.241170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.401 [2024-12-14 22:37:31.263451] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.305 22:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.305 22:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:12.305 22:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:12.305 22:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.305 22:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:12.305 22:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.305 22:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 425262 00:28:12.305 22:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:12.305 22:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:13.242 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 425262 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 425202 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.242 { 00:28:13.242 "params": { 00:28:13.242 "name": "Nvme$subsystem", 00:28:13.242 "trtype": "$TEST_TRANSPORT", 00:28:13.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.242 "adrfam": "ipv4", 00:28:13.242 "trsvcid": "$NVMF_PORT", 00:28:13.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.242 "hdgst": ${hdgst:-false}, 00:28:13.242 "ddgst": ${ddgst:-false} 00:28:13.242 }, 00:28:13.242 "method": "bdev_nvme_attach_controller" 00:28:13.242 } 00:28:13.242 EOF 00:28:13.242 )") 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:13.242 22:37:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.242 { 00:28:13.242 "params": { 00:28:13.242 "name": "Nvme$subsystem", 00:28:13.242 "trtype": "$TEST_TRANSPORT", 00:28:13.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.242 "adrfam": "ipv4", 00:28:13.242 "trsvcid": "$NVMF_PORT", 00:28:13.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.242 "hdgst": ${hdgst:-false}, 00:28:13.242 "ddgst": ${ddgst:-false} 00:28:13.242 }, 00:28:13.242 "method": "bdev_nvme_attach_controller" 00:28:13.242 } 00:28:13.242 EOF 00:28:13.242 )") 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.242 { 00:28:13.242 "params": { 00:28:13.242 "name": "Nvme$subsystem", 00:28:13.242 "trtype": "$TEST_TRANSPORT", 00:28:13.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.242 "adrfam": "ipv4", 00:28:13.242 "trsvcid": "$NVMF_PORT", 00:28:13.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.242 "hdgst": ${hdgst:-false}, 00:28:13.242 "ddgst": ${ddgst:-false} 00:28:13.242 }, 00:28:13.242 "method": "bdev_nvme_attach_controller" 00:28:13.242 } 00:28:13.242 EOF 00:28:13.242 )") 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.242 
22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.242 { 00:28:13.242 "params": { 00:28:13.242 "name": "Nvme$subsystem", 00:28:13.242 "trtype": "$TEST_TRANSPORT", 00:28:13.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.242 "adrfam": "ipv4", 00:28:13.242 "trsvcid": "$NVMF_PORT", 00:28:13.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.242 "hdgst": ${hdgst:-false}, 00:28:13.242 "ddgst": ${ddgst:-false} 00:28:13.242 }, 00:28:13.242 "method": "bdev_nvme_attach_controller" 00:28:13.242 } 00:28:13.242 EOF 00:28:13.242 )") 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.242 { 00:28:13.242 "params": { 00:28:13.242 "name": "Nvme$subsystem", 00:28:13.242 "trtype": "$TEST_TRANSPORT", 00:28:13.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.242 "adrfam": "ipv4", 00:28:13.242 "trsvcid": "$NVMF_PORT", 00:28:13.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.242 "hdgst": ${hdgst:-false}, 00:28:13.242 "ddgst": ${ddgst:-false} 00:28:13.242 }, 00:28:13.242 "method": "bdev_nvme_attach_controller" 00:28:13.242 } 00:28:13.242 EOF 00:28:13.242 )") 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:28:13.242 { 00:28:13.242 "params": { 00:28:13.242 "name": "Nvme$subsystem", 00:28:13.242 "trtype": "$TEST_TRANSPORT", 00:28:13.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.242 "adrfam": "ipv4", 00:28:13.242 "trsvcid": "$NVMF_PORT", 00:28:13.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.242 "hdgst": ${hdgst:-false}, 00:28:13.242 "ddgst": ${ddgst:-false} 00:28:13.242 }, 00:28:13.242 "method": "bdev_nvme_attach_controller" 00:28:13.242 } 00:28:13.242 EOF 00:28:13.242 )") 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:13.242 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.243 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.243 { 00:28:13.243 "params": { 00:28:13.243 "name": "Nvme$subsystem", 00:28:13.243 "trtype": "$TEST_TRANSPORT", 00:28:13.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.243 "adrfam": "ipv4", 00:28:13.243 "trsvcid": "$NVMF_PORT", 00:28:13.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.243 "hdgst": ${hdgst:-false}, 00:28:13.243 "ddgst": ${ddgst:-false} 00:28:13.243 }, 00:28:13.243 "method": "bdev_nvme_attach_controller" 00:28:13.243 } 00:28:13.243 EOF 00:28:13.243 )") 00:28:13.243 [2024-12-14 22:37:34.110246] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:28:13.243 [2024-12-14 22:37:34.110291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425751 ] 00:28:13.243 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:13.243 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.243 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.243 { 00:28:13.243 "params": { 00:28:13.243 "name": "Nvme$subsystem", 00:28:13.243 "trtype": "$TEST_TRANSPORT", 00:28:13.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.243 "adrfam": "ipv4", 00:28:13.243 "trsvcid": "$NVMF_PORT", 00:28:13.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.243 "hdgst": ${hdgst:-false}, 00:28:13.243 "ddgst": ${ddgst:-false} 00:28:13.243 }, 00:28:13.243 "method": "bdev_nvme_attach_controller" 00:28:13.243 } 00:28:13.243 EOF 00:28:13.243 )") 00:28:13.243 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:13.243 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.243 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.243 { 00:28:13.243 "params": { 00:28:13.243 "name": "Nvme$subsystem", 00:28:13.243 "trtype": "$TEST_TRANSPORT", 00:28:13.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.243 "adrfam": "ipv4", 00:28:13.243 "trsvcid": "$NVMF_PORT", 00:28:13.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.243 "hdgst": 
${hdgst:-false}, 00:28:13.243 "ddgst": ${ddgst:-false} 00:28:13.243 }, 00:28:13.243 "method": "bdev_nvme_attach_controller" 00:28:13.243 } 00:28:13.243 EOF 00:28:13.243 )") 00:28:13.243 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:13.501 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.501 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.501 { 00:28:13.502 "params": { 00:28:13.502 "name": "Nvme$subsystem", 00:28:13.502 "trtype": "$TEST_TRANSPORT", 00:28:13.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.502 "adrfam": "ipv4", 00:28:13.502 "trsvcid": "$NVMF_PORT", 00:28:13.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.502 "hdgst": ${hdgst:-false}, 00:28:13.502 "ddgst": ${ddgst:-false} 00:28:13.502 }, 00:28:13.502 "method": "bdev_nvme_attach_controller" 00:28:13.502 } 00:28:13.502 EOF 00:28:13.502 )") 00:28:13.502 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:13.502 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:13.502 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:13.502 22:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:13.502 "params": { 00:28:13.502 "name": "Nvme1", 00:28:13.502 "trtype": "tcp", 00:28:13.502 "traddr": "10.0.0.2", 00:28:13.502 "adrfam": "ipv4", 00:28:13.502 "trsvcid": "4420", 00:28:13.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:13.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:13.502 "hdgst": false, 00:28:13.502 "ddgst": false 00:28:13.502 }, 00:28:13.502 "method": "bdev_nvme_attach_controller" 00:28:13.502 },{ 00:28:13.502 "params": { 00:28:13.502 "name": "Nvme2", 00:28:13.502 "trtype": "tcp", 00:28:13.502 "traddr": "10.0.0.2", 00:28:13.502 "adrfam": "ipv4", 00:28:13.502 "trsvcid": "4420", 00:28:13.502 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:13.502 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:13.502 "hdgst": false, 00:28:13.502 "ddgst": false 00:28:13.502 }, 00:28:13.502 "method": "bdev_nvme_attach_controller" 00:28:13.502 },{ 00:28:13.502 "params": { 00:28:13.502 "name": "Nvme3", 00:28:13.502 "trtype": "tcp", 00:28:13.502 "traddr": "10.0.0.2", 00:28:13.502 "adrfam": "ipv4", 00:28:13.502 "trsvcid": "4420", 00:28:13.502 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:13.502 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:13.502 "hdgst": false, 00:28:13.502 "ddgst": false 00:28:13.502 }, 00:28:13.502 "method": "bdev_nvme_attach_controller" 00:28:13.502 },{ 00:28:13.502 "params": { 00:28:13.502 "name": "Nvme4", 00:28:13.502 "trtype": "tcp", 00:28:13.502 "traddr": "10.0.0.2", 00:28:13.502 "adrfam": "ipv4", 00:28:13.502 "trsvcid": "4420", 00:28:13.502 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:13.502 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:13.502 "hdgst": false, 00:28:13.502 "ddgst": false 00:28:13.502 }, 00:28:13.502 "method": "bdev_nvme_attach_controller" 00:28:13.502 },{ 00:28:13.502 "params": { 
00:28:13.502 "name": "Nvme5", 00:28:13.502 "trtype": "tcp", 00:28:13.502 "traddr": "10.0.0.2", 00:28:13.502 "adrfam": "ipv4", 00:28:13.502 "trsvcid": "4420", 00:28:13.502 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:13.502 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:13.502 "hdgst": false, 00:28:13.502 "ddgst": false 00:28:13.502 }, 00:28:13.502 "method": "bdev_nvme_attach_controller" 00:28:13.502 },{ 00:28:13.502 "params": { 00:28:13.502 "name": "Nvme6", 00:28:13.502 "trtype": "tcp", 00:28:13.502 "traddr": "10.0.0.2", 00:28:13.502 "adrfam": "ipv4", 00:28:13.502 "trsvcid": "4420", 00:28:13.502 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:13.502 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:13.502 "hdgst": false, 00:28:13.502 "ddgst": false 00:28:13.502 }, 00:28:13.502 "method": "bdev_nvme_attach_controller" 00:28:13.502 },{ 00:28:13.502 "params": { 00:28:13.502 "name": "Nvme7", 00:28:13.502 "trtype": "tcp", 00:28:13.502 "traddr": "10.0.0.2", 00:28:13.502 "adrfam": "ipv4", 00:28:13.502 "trsvcid": "4420", 00:28:13.502 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:13.502 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:13.502 "hdgst": false, 00:28:13.502 "ddgst": false 00:28:13.502 }, 00:28:13.502 "method": "bdev_nvme_attach_controller" 00:28:13.502 },{ 00:28:13.502 "params": { 00:28:13.502 "name": "Nvme8", 00:28:13.502 "trtype": "tcp", 00:28:13.502 "traddr": "10.0.0.2", 00:28:13.502 "adrfam": "ipv4", 00:28:13.502 "trsvcid": "4420", 00:28:13.502 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:13.502 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:13.502 "hdgst": false, 00:28:13.502 "ddgst": false 00:28:13.502 }, 00:28:13.502 "method": "bdev_nvme_attach_controller" 00:28:13.502 },{ 00:28:13.502 "params": { 00:28:13.502 "name": "Nvme9", 00:28:13.502 "trtype": "tcp", 00:28:13.502 "traddr": "10.0.0.2", 00:28:13.502 "adrfam": "ipv4", 00:28:13.502 "trsvcid": "4420", 00:28:13.502 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:13.502 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:13.502 "hdgst": false, 00:28:13.502 "ddgst": false 00:28:13.502 }, 00:28:13.502 "method": "bdev_nvme_attach_controller" 00:28:13.502 },{ 00:28:13.502 "params": { 00:28:13.502 "name": "Nvme10", 00:28:13.502 "trtype": "tcp", 00:28:13.502 "traddr": "10.0.0.2", 00:28:13.502 "adrfam": "ipv4", 00:28:13.502 "trsvcid": "4420", 00:28:13.502 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:13.502 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:13.502 "hdgst": false, 00:28:13.502 "ddgst": false 00:28:13.502 }, 00:28:13.502 "method": "bdev_nvme_attach_controller" 00:28:13.502 }' 00:28:13.502 [2024-12-14 22:37:34.184919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.502 [2024-12-14 22:37:34.207278] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.878 Running I/O for 1 seconds... 00:28:16.072 2252.00 IOPS, 140.75 MiB/s 00:28:16.072 Latency(us) 00:28:16.072 [2024-12-14T21:37:36.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.072 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:16.072 Verification LBA range: start 0x0 length 0x400 00:28:16.072 Nvme1n1 : 1.04 245.00 15.31 0.00 0.00 258460.40 15728.64 222697.57 00:28:16.072 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:16.072 Verification LBA range: start 0x0 length 0x400 00:28:16.072 Nvme2n1 : 1.12 285.18 17.82 0.00 0.00 219309.40 23468.13 201726.05 00:28:16.072 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:16.072 Verification LBA range: start 0x0 length 0x400 00:28:16.072 Nvme3n1 : 1.12 288.02 18.00 0.00 0.00 213107.54 7146.54 210713.84 00:28:16.072 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:16.072 Verification LBA range: start 0x0 length 0x400 00:28:16.072 Nvme4n1 : 1.12 290.24 18.14 0.00 0.00 208802.52 2668.25 214708.42 00:28:16.072 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:16.072 Verification LBA range: start 0x0 length 0x400 00:28:16.072 Nvme5n1 : 1.13 282.87 17.68 0.00 0.00 211822.59 15791.06 209715.20 00:28:16.072 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:16.072 Verification LBA range: start 0x0 length 0x400 00:28:16.072 Nvme6n1 : 1.14 279.71 17.48 0.00 0.00 211256.95 15728.64 217704.35 00:28:16.072 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:16.072 Verification LBA range: start 0x0 length 0x400 00:28:16.072 Nvme7n1 : 1.14 280.87 17.55 0.00 0.00 207177.34 13856.18 223696.21 00:28:16.072 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:16.072 Verification LBA range: start 0x0 length 0x400 00:28:16.072 Nvme8n1 : 1.14 281.86 17.62 0.00 0.00 203210.56 18100.42 215707.06 00:28:16.072 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:16.072 Verification LBA range: start 0x0 length 0x400 00:28:16.072 Nvme9n1 : 1.15 279.16 17.45 0.00 0.00 202160.03 17101.78 218702.99 00:28:16.072 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:16.072 Verification LBA range: start 0x0 length 0x400 00:28:16.072 Nvme10n1 : 1.15 281.93 17.62 0.00 0.00 197440.68 15603.81 232684.01 00:28:16.072 [2024-12-14T21:37:36.956Z] =================================================================================================================== 00:28:16.072 [2024-12-14T21:37:36.956Z] Total : 2794.84 174.68 0.00 0.00 212330.15 2668.25 232684.01 00:28:16.331 22:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:16.331 22:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:16.331 22:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:28:16.331 22:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:16.331 22:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:16.331 22:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:16.331 22:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:16.331 22:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:16.331 22:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:16.331 22:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:16.331 22:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:16.331 rmmod nvme_tcp 00:28:16.331 rmmod nvme_fabrics 00:28:16.331 rmmod nvme_keyring 00:28:16.331 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:16.332 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:16.332 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:16.332 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 425202 ']' 00:28:16.332 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 425202 00:28:16.332 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 425202 ']' 00:28:16.332 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 425202 00:28:16.332 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:16.332 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:16.332 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 425202 00:28:16.332 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:16.332 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:16.332 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 425202' 00:28:16.332 killing process with pid 425202 00:28:16.332 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 425202 00:28:16.332 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 425202 00:28:16.591 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:16.591 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:16.591 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:16.591 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:16.591 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:16.591 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:16.591 22:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:16.591 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:16.591 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:16.591 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.591 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.591 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:19.126 00:28:19.126 real 0m15.198s 00:28:19.126 user 0m33.983s 00:28:19.126 sys 0m5.714s 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:19.126 ************************************ 00:28:19.126 END TEST nvmf_shutdown_tc1 00:28:19.126 ************************************ 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:19.126 ************************************ 00:28:19.126 
START TEST nvmf_shutdown_tc2 00:28:19.126 ************************************ 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:19.126 22:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:19.126 22:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:19.126 22:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:19.126 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:19.126 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:19.126 22:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.126 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.127 22:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:19.127 Found net devices under 0000:af:00.0: cvl_0_0 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:19.127 Found net devices under 0000:af:00.1: cvl_0_1 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:19.127 22:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:19.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:19.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:28:19.127 00:28:19.127 --- 10.0.0.2 ping statistics --- 00:28:19.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.127 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:19.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:28:19.127 00:28:19.127 --- 10.0.0.1 ping statistics --- 00:28:19.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.127 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:19.127 22:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=426853 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 426853 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 426853 ']' 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.127 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.387 [2024-12-14 22:37:40.018311] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:19.387 [2024-12-14 22:37:40.018363] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.387 [2024-12-14 22:37:40.099689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:19.387 [2024-12-14 22:37:40.122908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.387 [2024-12-14 22:37:40.122949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.387 [2024-12-14 22:37:40.122956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.387 [2024-12-14 22:37:40.122963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.387 [2024-12-14 22:37:40.122968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:19.387 [2024-12-14 22:37:40.124473] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.387 [2024-12-14 22:37:40.124581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.387 [2024-12-14 22:37:40.124687] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.387 [2024-12-14 22:37:40.124689] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:19.387 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:19.387 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:19.387 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:19.387 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:19.387 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.387 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.387 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:19.387 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.387 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.387 [2024-12-14 22:37:40.256446] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.387 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.387 22:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:19.387 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:19.387 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.387 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.387 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.647 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.647 Malloc1 00:28:19.647 [2024-12-14 22:37:40.373069] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.647 Malloc2 00:28:19.647 Malloc3 00:28:19.647 Malloc4 00:28:19.647 Malloc5 00:28:19.906 Malloc6 00:28:19.906 Malloc7 00:28:19.906 Malloc8 00:28:19.906 Malloc9 
00:28:19.906 Malloc10 00:28:19.906 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.906 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:19.906 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:19.906 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=427010 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 427010 /var/tmp/bdevperf.sock 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 427010 ']' 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:20.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.165 { 00:28:20.165 "params": { 00:28:20.165 "name": "Nvme$subsystem", 00:28:20.165 "trtype": "$TEST_TRANSPORT", 00:28:20.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.165 "adrfam": "ipv4", 00:28:20.165 "trsvcid": "$NVMF_PORT", 00:28:20.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.165 "hdgst": ${hdgst:-false}, 00:28:20.165 "ddgst": ${ddgst:-false} 00:28:20.165 }, 00:28:20.165 "method": "bdev_nvme_attach_controller" 00:28:20.165 } 00:28:20.165 EOF 00:28:20.165 )") 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.165 { 00:28:20.165 "params": { 00:28:20.165 "name": "Nvme$subsystem", 00:28:20.165 "trtype": "$TEST_TRANSPORT", 00:28:20.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.165 "adrfam": "ipv4", 00:28:20.165 "trsvcid": "$NVMF_PORT", 00:28:20.165 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.165 "hdgst": ${hdgst:-false}, 00:28:20.165 "ddgst": ${ddgst:-false} 00:28:20.165 }, 00:28:20.165 "method": "bdev_nvme_attach_controller" 00:28:20.165 } 00:28:20.165 EOF 00:28:20.165 )") 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.165 { 00:28:20.165 "params": { 00:28:20.165 "name": "Nvme$subsystem", 00:28:20.165 "trtype": "$TEST_TRANSPORT", 00:28:20.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.165 "adrfam": "ipv4", 00:28:20.165 "trsvcid": "$NVMF_PORT", 00:28:20.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.165 "hdgst": ${hdgst:-false}, 00:28:20.165 "ddgst": ${ddgst:-false} 00:28:20.165 }, 00:28:20.165 "method": "bdev_nvme_attach_controller" 00:28:20.165 } 00:28:20.165 EOF 00:28:20.165 )") 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.165 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.165 { 00:28:20.165 "params": { 00:28:20.165 "name": "Nvme$subsystem", 00:28:20.165 "trtype": "$TEST_TRANSPORT", 00:28:20.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.166 "adrfam": "ipv4", 00:28:20.166 "trsvcid": "$NVMF_PORT", 00:28:20.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.166 "hdgst": 
${hdgst:-false}, 00:28:20.166 "ddgst": ${ddgst:-false} 00:28:20.166 }, 00:28:20.166 "method": "bdev_nvme_attach_controller" 00:28:20.166 } 00:28:20.166 EOF 00:28:20.166 )") 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.166 { 00:28:20.166 "params": { 00:28:20.166 "name": "Nvme$subsystem", 00:28:20.166 "trtype": "$TEST_TRANSPORT", 00:28:20.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.166 "adrfam": "ipv4", 00:28:20.166 "trsvcid": "$NVMF_PORT", 00:28:20.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.166 "hdgst": ${hdgst:-false}, 00:28:20.166 "ddgst": ${ddgst:-false} 00:28:20.166 }, 00:28:20.166 "method": "bdev_nvme_attach_controller" 00:28:20.166 } 00:28:20.166 EOF 00:28:20.166 )") 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.166 { 00:28:20.166 "params": { 00:28:20.166 "name": "Nvme$subsystem", 00:28:20.166 "trtype": "$TEST_TRANSPORT", 00:28:20.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.166 "adrfam": "ipv4", 00:28:20.166 "trsvcid": "$NVMF_PORT", 00:28:20.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.166 "hdgst": ${hdgst:-false}, 00:28:20.166 "ddgst": ${ddgst:-false} 00:28:20.166 }, 00:28:20.166 "method": "bdev_nvme_attach_controller" 
00:28:20.166 } 00:28:20.166 EOF 00:28:20.166 )") 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:20.166 [2024-12-14 22:37:40.844808] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:20.166 [2024-12-14 22:37:40.844856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427010 ] 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.166 { 00:28:20.166 "params": { 00:28:20.166 "name": "Nvme$subsystem", 00:28:20.166 "trtype": "$TEST_TRANSPORT", 00:28:20.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.166 "adrfam": "ipv4", 00:28:20.166 "trsvcid": "$NVMF_PORT", 00:28:20.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.166 "hdgst": ${hdgst:-false}, 00:28:20.166 "ddgst": ${ddgst:-false} 00:28:20.166 }, 00:28:20.166 "method": "bdev_nvme_attach_controller" 00:28:20.166 } 00:28:20.166 EOF 00:28:20.166 )") 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.166 { 00:28:20.166 "params": { 00:28:20.166 "name": "Nvme$subsystem", 00:28:20.166 "trtype": "$TEST_TRANSPORT", 00:28:20.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.166 "adrfam": "ipv4", 00:28:20.166 
"trsvcid": "$NVMF_PORT", 00:28:20.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.166 "hdgst": ${hdgst:-false}, 00:28:20.166 "ddgst": ${ddgst:-false} 00:28:20.166 }, 00:28:20.166 "method": "bdev_nvme_attach_controller" 00:28:20.166 } 00:28:20.166 EOF 00:28:20.166 )") 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.166 { 00:28:20.166 "params": { 00:28:20.166 "name": "Nvme$subsystem", 00:28:20.166 "trtype": "$TEST_TRANSPORT", 00:28:20.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.166 "adrfam": "ipv4", 00:28:20.166 "trsvcid": "$NVMF_PORT", 00:28:20.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.166 "hdgst": ${hdgst:-false}, 00:28:20.166 "ddgst": ${ddgst:-false} 00:28:20.166 }, 00:28:20.166 "method": "bdev_nvme_attach_controller" 00:28:20.166 } 00:28:20.166 EOF 00:28:20.166 )") 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.166 { 00:28:20.166 "params": { 00:28:20.166 "name": "Nvme$subsystem", 00:28:20.166 "trtype": "$TEST_TRANSPORT", 00:28:20.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.166 "adrfam": "ipv4", 00:28:20.166 "trsvcid": "$NVMF_PORT", 00:28:20.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.166 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:28:20.166 "hdgst": ${hdgst:-false}, 00:28:20.166 "ddgst": ${ddgst:-false} 00:28:20.166 }, 00:28:20.166 "method": "bdev_nvme_attach_controller" 00:28:20.166 } 00:28:20.166 EOF 00:28:20.166 )") 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:20.166 22:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:20.166 "params": { 00:28:20.166 "name": "Nvme1", 00:28:20.166 "trtype": "tcp", 00:28:20.166 "traddr": "10.0.0.2", 00:28:20.166 "adrfam": "ipv4", 00:28:20.166 "trsvcid": "4420", 00:28:20.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:20.166 "hdgst": false, 00:28:20.166 "ddgst": false 00:28:20.166 }, 00:28:20.166 "method": "bdev_nvme_attach_controller" 00:28:20.166 },{ 00:28:20.166 "params": { 00:28:20.166 "name": "Nvme2", 00:28:20.166 "trtype": "tcp", 00:28:20.166 "traddr": "10.0.0.2", 00:28:20.166 "adrfam": "ipv4", 00:28:20.166 "trsvcid": "4420", 00:28:20.166 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:20.166 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:20.166 "hdgst": false, 00:28:20.166 "ddgst": false 00:28:20.166 }, 00:28:20.166 "method": "bdev_nvme_attach_controller" 00:28:20.166 },{ 00:28:20.166 "params": { 00:28:20.166 "name": "Nvme3", 00:28:20.166 "trtype": "tcp", 00:28:20.166 "traddr": "10.0.0.2", 00:28:20.166 "adrfam": "ipv4", 00:28:20.166 "trsvcid": "4420", 00:28:20.166 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:20.166 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:20.166 "hdgst": false, 00:28:20.166 "ddgst": false 00:28:20.166 }, 00:28:20.166 "method": "bdev_nvme_attach_controller" 00:28:20.166 },{ 
00:28:20.166 "params": { 00:28:20.166 "name": "Nvme4", 00:28:20.166 "trtype": "tcp", 00:28:20.166 "traddr": "10.0.0.2", 00:28:20.166 "adrfam": "ipv4", 00:28:20.166 "trsvcid": "4420", 00:28:20.166 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:20.166 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:20.166 "hdgst": false, 00:28:20.166 "ddgst": false 00:28:20.166 }, 00:28:20.166 "method": "bdev_nvme_attach_controller" 00:28:20.166 },{ 00:28:20.166 "params": { 00:28:20.166 "name": "Nvme5", 00:28:20.166 "trtype": "tcp", 00:28:20.166 "traddr": "10.0.0.2", 00:28:20.166 "adrfam": "ipv4", 00:28:20.166 "trsvcid": "4420", 00:28:20.166 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:20.166 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:20.166 "hdgst": false, 00:28:20.166 "ddgst": false 00:28:20.166 }, 00:28:20.166 "method": "bdev_nvme_attach_controller" 00:28:20.166 },{ 00:28:20.166 "params": { 00:28:20.166 "name": "Nvme6", 00:28:20.166 "trtype": "tcp", 00:28:20.166 "traddr": "10.0.0.2", 00:28:20.166 "adrfam": "ipv4", 00:28:20.166 "trsvcid": "4420", 00:28:20.166 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:20.166 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:20.166 "hdgst": false, 00:28:20.166 "ddgst": false 00:28:20.166 }, 00:28:20.166 "method": "bdev_nvme_attach_controller" 00:28:20.166 },{ 00:28:20.166 "params": { 00:28:20.166 "name": "Nvme7", 00:28:20.166 "trtype": "tcp", 00:28:20.166 "traddr": "10.0.0.2", 00:28:20.166 "adrfam": "ipv4", 00:28:20.166 "trsvcid": "4420", 00:28:20.166 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:20.166 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:20.166 "hdgst": false, 00:28:20.166 "ddgst": false 00:28:20.166 }, 00:28:20.166 "method": "bdev_nvme_attach_controller" 00:28:20.166 },{ 00:28:20.166 "params": { 00:28:20.166 "name": "Nvme8", 00:28:20.167 "trtype": "tcp", 00:28:20.167 "traddr": "10.0.0.2", 00:28:20.167 "adrfam": "ipv4", 00:28:20.167 "trsvcid": "4420", 00:28:20.167 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:20.167 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:28:20.167 "hdgst": false, 00:28:20.167 "ddgst": false 00:28:20.167 }, 00:28:20.167 "method": "bdev_nvme_attach_controller" 00:28:20.167 },{ 00:28:20.167 "params": { 00:28:20.167 "name": "Nvme9", 00:28:20.167 "trtype": "tcp", 00:28:20.167 "traddr": "10.0.0.2", 00:28:20.167 "adrfam": "ipv4", 00:28:20.167 "trsvcid": "4420", 00:28:20.167 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:20.167 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:20.167 "hdgst": false, 00:28:20.167 "ddgst": false 00:28:20.167 }, 00:28:20.167 "method": "bdev_nvme_attach_controller" 00:28:20.167 },{ 00:28:20.167 "params": { 00:28:20.167 "name": "Nvme10", 00:28:20.167 "trtype": "tcp", 00:28:20.167 "traddr": "10.0.0.2", 00:28:20.167 "adrfam": "ipv4", 00:28:20.167 "trsvcid": "4420", 00:28:20.167 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:20.167 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:20.167 "hdgst": false, 00:28:20.167 "ddgst": false 00:28:20.167 }, 00:28:20.167 "method": "bdev_nvme_attach_controller" 00:28:20.167 }' 00:28:20.167 [2024-12-14 22:37:40.919811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.167 [2024-12-14 22:37:40.942150] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.544 Running I/O for 10 seconds... 
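The trace above shows `gen_nvmf_target_json` building one `bdev_nvme_attach_controller` stanza per subsystem with a heredoc, accumulating them in a bash array, and comma-joining the array for bdevperf's `--json` input. A minimal standalone sketch of that pattern (the `gen_config` name and the fixed `10.0.0.2`/`4420` values are illustrative stand-ins, not SPDK's actual helper):

```shell
#!/usr/bin/env bash
# Sketch of the heredoc-array config pattern from the trace: one JSON
# stanza per subsystem id, collected in an array and joined with IFS=,
gen_config() {
  local config=() subsystem
  for subsystem in "$@"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  # Comma-join the stanzas, as the trace does before piping through jq
  local IFS=,
  printf '%s\n' "${config[*]}"
}

gen_config 1 2
```

In the real script the joined output is additionally normalized with `jq .` before being handed to bdevperf via `/dev/fd/63`.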
00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:22.112 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 427010 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 427010 ']' 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 427010 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 427010 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 427010' 00:28:22.371 killing process with pid 427010 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 427010 00:28:22.371 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 427010 00:28:22.630 Received 
shutdown signal, test time was about 0.844455 seconds
00:28:22.630
00:28:22.630 Latency(us)
00:28:22.630 [2024-12-14T21:37:43.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:22.630 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.630 Verification LBA range: start 0x0 length 0x400
00:28:22.630 Nvme1n1 : 0.84 306.25 19.14 0.00 0.00 206610.29 16602.45 212711.13
00:28:22.630 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.630 Verification LBA range: start 0x0 length 0x400
00:28:22.630 Nvme2n1 : 0.84 303.39 18.96 0.00 0.00 204609.34 17101.78 196732.83
00:28:22.630 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.630 Verification LBA range: start 0x0 length 0x400
00:28:22.630 Nvme3n1 : 0.84 304.19 19.01 0.00 0.00 200271.48 15541.39 211712.49
00:28:22.630 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.630 Verification LBA range: start 0x0 length 0x400
00:28:22.630 Nvme4n1 : 0.82 319.37 19.96 0.00 0.00 185079.51 7458.62 211712.49
00:28:22.630 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.630 Verification LBA range: start 0x0 length 0x400
00:28:22.630 Nvme5n1 : 0.81 244.74 15.30 0.00 0.00 235757.29 3417.23 209715.20
00:28:22.630 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.630 Verification LBA range: start 0x0 length 0x400
00:28:22.630 Nvme6n1 : 0.84 305.38 19.09 0.00 0.00 187819.52 15791.06 212711.13
00:28:22.630 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.630 Verification LBA range: start 0x0 length 0x400
00:28:22.630 Nvme7n1 : 0.83 316.30 19.77 0.00 0.00 177042.91 1552.58 214708.42
00:28:22.630 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.630 Verification LBA range: start 0x0 length 0x400
00:28:22.630 Nvme8n1 : 0.81 242.77 15.17 0.00 0.00 223396.65 3932.16 196732.83
00:28:22.630 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.630 Verification LBA range: start 0x0 length 0x400
00:28:22.630 Nvme9n1 : 0.82 234.07 14.63 0.00 0.00 229346.09 17975.59 217704.35
00:28:22.630 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.630 Verification LBA range: start 0x0 length 0x400
00:28:22.630 Nvme10n1 : 0.82 233.41 14.59 0.00 0.00 224921.76 17850.76 230686.72
00:28:22.630 [2024-12-14T21:37:43.514Z] ===================================================================================================================
00:28:22.630 [2024-12-14T21:37:43.514Z] Total : 2809.86 175.62 0.00 0.00 205152.79 1552.58 230686.72
00:28:22.630 22:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:23.567 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 426853 00:28:23.567 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:23.567 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:23.567 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
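The `waitforio` sequence traced above polls `bdev_get_iostat` over the bdevperf RPC socket, extracting `.bdevs[0].num_read_ops` with jq, and succeeds once at least 100 read ops are observed within ten 0.25 s retries. A runnable sketch of that retry loop (`poll_io` is an illustrative stand-in that takes the counter command as arguments so it can run without SPDK; in the real test the command is `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1` piped through jq):

```shell
#!/usr/bin/env bash
# Poll a read-op counter until it reaches 100 or the retry budget runs out,
# mirroring the shutdown.sh@60-68 loop in the trace (i counts down from 10).
poll_io() {
  local attempts=$1 count i
  shift
  for ((i = attempts; i != 0; i--)); do
    count=$("$@")   # a command printing the current num_read_ops value
    if [ "$count" -ge 100 ]; then
      return 0      # enough I/O observed (ret=0 / break in the trace)
    fi
    sleep 0.25      # same back-off as shutdown.sh@68
  done
  return 1
}
```

In the traced run the first sample returned 67 (below the threshold), and the second, after the 0.25 s sleep, returned 131, ending the loop with `ret=0`.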
00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:23.826 rmmod nvme_tcp 00:28:23.826 rmmod nvme_fabrics 00:28:23.826 rmmod nvme_keyring 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 426853 ']' 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 426853 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 426853 ']' 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 426853 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 426853 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 426853' 00:28:23.826 killing process with pid 426853 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 426853 00:28:23.826 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 426853 00:28:24.084 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:24.084 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:24.084 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:24.084 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:24.084 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:24.084 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:24.084 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:24.084 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:24.084 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:24.084 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.084 22:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.085 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:26.620 00:28:26.620 real 0m7.400s 00:28:26.620 user 0m21.739s 00:28:26.620 sys 0m1.327s 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.620 ************************************ 00:28:26.620 END TEST nvmf_shutdown_tc2 00:28:26.620 ************************************ 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:26.620 ************************************ 00:28:26.620 START TEST nvmf_shutdown_tc3 00:28:26.620 ************************************ 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.620 
22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.620 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:26.621 22:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:26.621 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:26.621 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.621 22:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:26.621 Found net devices under 0000:af:00.0: cvl_0_0 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.621 22:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:26.621 Found net devices under 0000:af:00.1: cvl_0_1 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:26.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:26.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:28:26.621 00:28:26.621 --- 10.0.0.2 ping statistics --- 00:28:26.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.621 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:26.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:26.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:28:26.621 00:28:26.621 --- 10.0.0.1 ping statistics --- 00:28:26.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.621 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.621 
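The `nvmftestinit` trace above (the `nvmf/common.sh@250`-`@291` steps) builds a two-port loopback topology: one ice port (`cvl_0_0`) is moved into a network namespace and given the target IP, the other (`cvl_0_1`) stays in the root namespace as the initiator, a firewall accept rule is punched for TCP port 4420, and a ping in each direction verifies reachability. A minimal sketch of that sequence is below; `netns_setup_plan` is an illustrative helper (not part of `nvmf/common.sh`) that prints each privileged command in the order the log shows instead of executing it, so the plan can be inspected without root.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology nvmftestinit builds in the log above.
# Interface names, namespace name, IPs, and port are taken from the trace;
# the netns_setup_plan wrapper itself is hypothetical.

TARGET_IF=cvl_0_0            # moved into the namespace; owns the target IP
INITIATOR_IF=cvl_0_1         # stays in the root namespace
NS=cvl_0_0_ns_spdk
INITIATOR_IP=10.0.0.1
TARGET_IP=10.0.0.2

netns_setup_plan() {
  # Emit (rather than run) each step, mirroring the order in the trace.
  echo "ip -4 addr flush $TARGET_IF"
  echo "ip -4 addr flush $INITIATOR_IF"
  echo "ip netns add $NS"
  echo "ip link set $TARGET_IF netns $NS"
  echo "ip addr add $INITIATOR_IP/24 dev $INITIATOR_IF"
  echo "ip netns exec $NS ip addr add $TARGET_IP/24 dev $TARGET_IF"
  echo "ip link set $INITIATOR_IF up"
  echo "ip netns exec $NS ip link set $TARGET_IF up"
  echo "ip netns exec $NS ip link set lo up"
  echo "iptables -I INPUT 1 -i $INITIATOR_IF -p tcp --dport 4420 -j ACCEPT"
  echo "ping -c 1 $TARGET_IP"
  echo "ip netns exec $NS ping -c 1 $INITIATOR_IP"
}

netns_setup_plan
```

Note the design consequence visible later in the trace: because the target interface lives inside `cvl_0_0_ns_spdk`, every target-side process (the `nvmf_tgt` app, the namespaced ping) must be launched through `ip netns exec cvl_0_0_ns_spdk`, which is exactly the `NVMF_TARGET_NS_CMD` prefix prepended to `NVMF_APP` in the log.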
22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=428247 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 428247 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 428247 ']' 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.621 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:26.622 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.622 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:26.622 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.622 [2024-12-14 22:37:47.447064] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:28:26.622 [2024-12-14 22:37:47.447105] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.881 [2024-12-14 22:37:47.509940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:26.881 [2024-12-14 22:37:47.532918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.881 [2024-12-14 22:37:47.532954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.881 [2024-12-14 22:37:47.532961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.881 [2024-12-14 22:37:47.532967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.881 [2024-12-14 22:37:47.532972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:26.881 [2024-12-14 22:37:47.534334] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.881 [2024-12-14 22:37:47.534441] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:26.881 [2024-12-14 22:37:47.534549] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.881 [2024-12-14 22:37:47.534550] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.881 [2024-12-14 22:37:47.665605] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.881 22:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.881 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.881 Malloc1 00:28:27.140 [2024-12-14 22:37:47.773427] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.140 Malloc2 00:28:27.140 Malloc3 00:28:27.140 Malloc4 00:28:27.140 Malloc5 00:28:27.140 Malloc6 00:28:27.140 Malloc7 00:28:27.400 Malloc8 00:28:27.400 Malloc9 
00:28:27.400 Malloc10 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=428303 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 428303 /var/tmp/bdevperf.sock 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 428303 ']' 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:28:27.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.400 { 00:28:27.400 "params": { 00:28:27.400 "name": "Nvme$subsystem", 00:28:27.400 "trtype": "$TEST_TRANSPORT", 00:28:27.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.400 "adrfam": "ipv4", 00:28:27.400 "trsvcid": "$NVMF_PORT", 00:28:27.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.400 "hdgst": ${hdgst:-false}, 00:28:27.400 "ddgst": ${ddgst:-false} 00:28:27.400 }, 00:28:27.400 "method": "bdev_nvme_attach_controller" 00:28:27.400 } 00:28:27.400 EOF 00:28:27.400 )") 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.400 { 00:28:27.400 "params": { 00:28:27.400 "name": "Nvme$subsystem", 00:28:27.400 "trtype": "$TEST_TRANSPORT", 00:28:27.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.400 
"adrfam": "ipv4", 00:28:27.400 "trsvcid": "$NVMF_PORT", 00:28:27.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.400 "hdgst": ${hdgst:-false}, 00:28:27.400 "ddgst": ${ddgst:-false} 00:28:27.400 }, 00:28:27.400 "method": "bdev_nvme_attach_controller" 00:28:27.400 } 00:28:27.400 EOF 00:28:27.400 )") 00:28:27.400 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
[the xtrace block above -- for subsystem in "${@:-1}" appending one bdev_nvme_attach_controller heredoc fragment to config, followed by cat -- repeats unchanged for each remaining subsystem]
00:28:27.401 [2024-12-14 22:37:48.252212] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:28:27.401 [2024-12-14 22:37:48.252262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428303 ]
00:28:27.401 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
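The xtrace above shows how the nvmf/common.sh helper assembles the bdevperf configuration: a loop appends one bdev_nvme_attach_controller JSON fragment per subsystem to a config array, and the fragments are later joined with commas via IFS=, and printf. The following is a minimal runnable sketch of that pattern, not the actual helper; the transport, address, and port values are placeholders, and the loop covers two subsystems instead of the ten in this run.

```shell
#!/usr/bin/env bash
# Sketch of the config assembly visible in the trace; values are placeholders.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do   # the traced run loops over subsystems 1..10
  # Append one attach-controller fragment; ${hdgst:-false} / ${ddgst:-false}
  # fall back to false when the digest variables are unset, as in the trace.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas, as the IFS=, / printf '%s\n' step does.
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '%s\n' "$joined"
```

The comma join produces the `},{` boundaries between fragments that are visible in the printf output later in the trace.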
00:28:27.401 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:27.401 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:27.401 "params": { 00:28:27.401 "name": "Nvme1", 00:28:27.401 "trtype": "tcp", 00:28:27.401 "traddr": "10.0.0.2", 00:28:27.401 "adrfam": "ipv4", 00:28:27.401 "trsvcid": "4420", 00:28:27.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:27.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:27.401 "hdgst": false, 00:28:27.401 "ddgst": false 00:28:27.401 }, 00:28:27.401 "method": "bdev_nvme_attach_controller" 00:28:27.401 },{ 00:28:27.401 "params": { 00:28:27.401 "name": "Nvme2", 00:28:27.401 "trtype": "tcp", 00:28:27.401 "traddr": "10.0.0.2", 00:28:27.401 "adrfam": "ipv4", 00:28:27.401 "trsvcid": "4420", 00:28:27.401 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:27.401 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:27.401 "hdgst": false, 00:28:27.401 "ddgst": false 00:28:27.401 }, 00:28:27.401 "method": "bdev_nvme_attach_controller" 00:28:27.401 },{ 00:28:27.401 "params": { 00:28:27.401 "name": "Nvme3", 00:28:27.401 "trtype": "tcp", 00:28:27.401 "traddr": "10.0.0.2", 00:28:27.401 "adrfam": "ipv4", 00:28:27.401 "trsvcid": "4420", 00:28:27.401 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:27.401 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:27.401 "hdgst": false, 00:28:27.401 "ddgst": false 00:28:27.401 }, 00:28:27.401 "method": "bdev_nvme_attach_controller" 00:28:27.401 },{ 00:28:27.401 "params": { 00:28:27.401 "name": "Nvme4", 00:28:27.401 "trtype": "tcp", 00:28:27.401 "traddr": "10.0.0.2", 00:28:27.401 "adrfam": "ipv4", 00:28:27.401 "trsvcid": "4420", 00:28:27.401 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:27.401 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:27.401 "hdgst": false, 00:28:27.401 "ddgst": false 00:28:27.401 }, 00:28:27.401 "method": "bdev_nvme_attach_controller" 00:28:27.401 },{ 00:28:27.401 "params": { 
00:28:27.401 "name": "Nvme5", 00:28:27.401 "trtype": "tcp", 00:28:27.401 "traddr": "10.0.0.2", 00:28:27.401 "adrfam": "ipv4", 00:28:27.401 "trsvcid": "4420", 00:28:27.401 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:27.401 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:27.401 "hdgst": false, 00:28:27.401 "ddgst": false 00:28:27.401 }, 00:28:27.401 "method": "bdev_nvme_attach_controller" 00:28:27.401 },{ 00:28:27.401 "params": { 00:28:27.401 "name": "Nvme6", 00:28:27.401 "trtype": "tcp", 00:28:27.401 "traddr": "10.0.0.2", 00:28:27.401 "adrfam": "ipv4", 00:28:27.401 "trsvcid": "4420", 00:28:27.401 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:27.401 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:27.401 "hdgst": false, 00:28:27.401 "ddgst": false 00:28:27.401 }, 00:28:27.401 "method": "bdev_nvme_attach_controller" 00:28:27.401 },{ 00:28:27.401 "params": { 00:28:27.401 "name": "Nvme7", 00:28:27.402 "trtype": "tcp", 00:28:27.402 "traddr": "10.0.0.2", 00:28:27.402 "adrfam": "ipv4", 00:28:27.402 "trsvcid": "4420", 00:28:27.402 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:27.402 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:27.402 "hdgst": false, 00:28:27.402 "ddgst": false 00:28:27.402 }, 00:28:27.402 "method": "bdev_nvme_attach_controller" 00:28:27.402 },{ 00:28:27.402 "params": { 00:28:27.402 "name": "Nvme8", 00:28:27.402 "trtype": "tcp", 00:28:27.402 "traddr": "10.0.0.2", 00:28:27.402 "adrfam": "ipv4", 00:28:27.402 "trsvcid": "4420", 00:28:27.402 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:27.402 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:27.402 "hdgst": false, 00:28:27.402 "ddgst": false 00:28:27.402 }, 00:28:27.402 "method": "bdev_nvme_attach_controller" 00:28:27.402 },{ 00:28:27.402 "params": { 00:28:27.402 "name": "Nvme9", 00:28:27.402 "trtype": "tcp", 00:28:27.402 "traddr": "10.0.0.2", 00:28:27.402 "adrfam": "ipv4", 00:28:27.402 "trsvcid": "4420", 00:28:27.402 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:27.402 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:27.402 "hdgst": false, 00:28:27.402 "ddgst": false 00:28:27.402 }, 00:28:27.402 "method": "bdev_nvme_attach_controller" 00:28:27.402 },{ 00:28:27.402 "params": { 00:28:27.402 "name": "Nvme10", 00:28:27.402 "trtype": "tcp", 00:28:27.402 "traddr": "10.0.0.2", 00:28:27.402 "adrfam": "ipv4", 00:28:27.402 "trsvcid": "4420", 00:28:27.402 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:27.402 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:27.402 "hdgst": false, 00:28:27.402 "ddgst": false 00:28:27.402 }, 00:28:27.402 "method": "bdev_nvme_attach_controller" 00:28:27.402 }' 00:28:27.661 [2024-12-14 22:37:48.331782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.661 [2024-12-14 22:37:48.354291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.564 Running I/O for 10 seconds... 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:29.564 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:29.823 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
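The waitforio helper traced here (target/shutdown.sh) polls bdev_get_iostat over the bdevperf RPC socket until Nvme1n1 has completed at least 100 reads, sleeping 0.25 s between attempts for at most 10 tries; in this run the count climbs 3, then 67, then 131 before crossing the threshold. Below is a runnable sketch of that retry loop only: the real RPC/jq query is replaced by a stub counter so the example self-terminates, and the increment values are illustrative.

```shell
#!/usr/bin/env bash
# Sketch of waitforio's retry loop. The stub increment stands in for:
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
#     | jq -r '.bdevs[0].num_read_ops'
reads=0

waitforio() {
  local ret=1 i
  for ((i = 10; i != 0; i--)); do   # at most 10 polls, as in the trace
    reads=$((reads + 67))           # stub: pretend I/O progressed
    read_io_count=$reads
    if [ "$read_io_count" -ge 100 ]; then
      ret=0                         # enough reads: bdevperf is doing I/O
      break
    fi
    sleep 0.25                      # same back-off as the trace
  done
  return $ret
}

if waitforio; then
  echo "I/O is flowing (read_io_count=$read_io_count)"
fi
```

Returning non-zero after 10 failed polls lets the caller's trap fire and abort the shutdown test instead of hanging.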
00:28:29.823 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:29.823 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:29.823 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:29.823 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.823 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:29.823 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.082 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:30.082 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:30.082 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:30.359 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:30.359 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:30.359 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:30.359 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:30.359 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.359 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:28:30.359 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.359 22:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:30.359 22:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:30.359 22:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:30.359 22:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:30.359 22:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:30.359 22:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 428247 00:28:30.359 22:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 428247 ']' 00:28:30.359 22:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 428247 00:28:30.359 22:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:28:30.359 22:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.359 22:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 428247 00:28:30.359 22:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:30.359 22:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:30.359 22:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 428247' killing process with pid 428247 22:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 428247 22:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 428247
00:28:30.359 [2024-12-14 22:37:51.081975] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ab40 is same with the state(6) to be set
[last message repeated for tqpair=0x78ab40 through 22:37:51.082449]
00:28:30.360 [2024-12-14 22:37:51.084319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b030 is same with the state(6) to be set
[last message repeated for tqpair=0x78b030 through 22:37:51.084598]
00:28:30.360 [2024-12-14 22:37:51.085709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set
00:28:30.360 [2024-12-14 22:37:51.085733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set
00:28:30.360 [2024-12-14 22:37:51.085745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set
00:28:30.360 [2024-12-14 22:37:51.085752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500
is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 
00:28:30.360 [2024-12-14 22:37:51.085836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.360 [2024-12-14 22:37:51.085888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.085894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.085900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.085911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.085917] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.085924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.085932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.085938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.085945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.085952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.085958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.085964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.085969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.085976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.085982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.085987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.085994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086056] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086063] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 
is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086074] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.086133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b500 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 
00:28:30.361 [2024-12-14 22:37:51.087139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087146] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087224] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 
is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.361 [2024-12-14 22:37:51.087444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.087451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.087458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 
00:28:30.362 [2024-12-14 22:37:51.087464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.087471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.087477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.087483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.087488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.087494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.087500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.087507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.087513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.087519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.087525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.087531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b9f0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.088146] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bec0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.088169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bec0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.088176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bec0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.088184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bec0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089138] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 
is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 
00:28:30.362 [2024-12-14 22:37:51.089266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089343] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set 00:28:30.362 [2024-12-14 22:37:51.089422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x78c3b0 is same with the state(6) to be set
00:28:30.362 [2024-12-14 22:37:51.090413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c730 is same with the state(6) to be set
00:28:30.363 [2024-12-14 22:37:51.091617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78cc00 is same with the state(6) to be set
00:28:30.364 [2024-12-14 22:37:51.092571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78d0d0 is same with the state(6) to be set
00:28:30.364 [2024-12-14 22:37:51.095608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.364 [2024-12-14 22:37:51.095640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.364
[2024-12-14 22:37:51.095656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.364 [2024-12-14 22:37:51.095665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.364 [2024-12-14 22:37:51.095675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.364 [2024-12-14 22:37:51.095682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.364 [2024-12-14 22:37:51.095690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.364 [2024-12-14 22:37:51.095696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.364 [2024-12-14 22:37:51.095704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.364 [2024-12-14 22:37:51.095715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.364 [2024-12-14 22:37:51.095723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.364 [2024-12-14 22:37:51.095730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.364 [2024-12-14 22:37:51.095738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.364 [2024-12-14 22:37:51.095745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.364 [2024-12-14 22:37:51.095753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.364 [2024-12-14 22:37:51.095760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.364 [2024-12-14 22:37:51.095768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.364 [2024-12-14 22:37:51.095774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.364 [2024-12-14 22:37:51.095782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.364 [2024-12-14 22:37:51.095789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.364 [2024-12-14 22:37:51.095798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.364 [2024-12-14 22:37:51.095805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.364 [2024-12-14 22:37:51.095812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.364 [2024-12-14 22:37:51.095819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.364 [2024-12-14 22:37:51.095826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.364 [2024-12-14 22:37:51.095833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.364 [2024-12-14 22:37:51.095842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.364 [2024-12-14 22:37:51.095849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.364 [2024-12-14 22:37:51.095857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.364 [2024-12-14 22:37:51.095864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.364 [2024-12-14 22:37:51.095872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.364 [2024-12-14 22:37:51.095879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.364 [2024-12-14 22:37:51.095887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.364 [2024-12-14 22:37:51.095894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.364 [2024-12-14 22:37:51.095912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.364 [2024-12-14 22:37:51.095919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:30.364 [2024-12-14 22:37:51.095927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.365 [2024-12-14 22:37:51.095934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.365 [2024-12-14 22:37:51.095942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.365 [2024-12-14 22:37:51.095949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.365 [2024-12-14 22:37:51.095957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.365 [2024-12-14 22:37:51.095964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.365 [2024-12-14 22:37:51.095972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.365 [2024-12-14 22:37:51.095979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.365 [2024-12-14 22:37:51.095987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.365 [2024-12-14 22:37:51.095993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.365 [2024-12-14 22:37:51.096003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.365 [2024-12-14 
22:37:51.096009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.365 [2024-12-14 22:37:51.096017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.365 [2024-12-14 22:37:51.096024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.365 [2024-12-14 22:37:51.096032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.365 [2024-12-14 22:37:51.096039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.365 [2024-12-14 22:37:51.096047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.365 [2024-12-14 22:37:51.096054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.365 [2024-12-14 22:37:51.096062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.365 [2024-12-14 22:37:51.096068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.365 [2024-12-14 22:37:51.096077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.365 [2024-12-14 22:37:51.096083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.365 [2024-12-14 22:37:51.096091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.365 [2024-12-14 22:37:51.096099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.365 [2024-12-14 22:37:51.096106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.365 [2024-12-14 22:37:51.096114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.365 [2024-12-14 22:37:51.096122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.365 [2024-12-14 22:37:51.096128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.365 [2024-12-14 22:37:51.096136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.365 [2024-12-14 22:37:51.096142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.365 [2024-12-14 22:37:51.096151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.365 [2024-12-14 22:37:51.096158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.365 [2024-12-14 22:37:51.096166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.365 [2024-12-14 22:37:51.096173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.365 [2024-12-14 22:37:51.096531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.365 [2024-12-14 22:37:51.096539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.366 [2024-12-14 22:37:51.096546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.096554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.366 [2024-12-14 22:37:51.096560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.096568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.366 [2024-12-14 22:37:51.096575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.096586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.366 [2024-12-14 22:37:51.096592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.096601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.366 [2024-12-14 22:37:51.096606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.096634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:30.366 [2024-12-14 22:37:51.096739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.096751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.096760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.096767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.096774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.096783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.096790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.096797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.096804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2e30 is same with the state(6) to be set
00:28:30.366 [2024-12-14 22:37:51.096827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.096835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.096843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.096850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.096857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.096864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.096871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.096878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.096885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fda90 is same with the state(6) to be set
00:28:30.366 [2024-12-14 22:37:51.096917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.096926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.096935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.096942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.096949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.096955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.096962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.096969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.096975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1487270 is same with the state(6) to be set
00:28:30.366 [2024-12-14 22:37:51.097001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2610 is same with the state(6) to be set
00:28:30.366 [2024-12-14 22:37:51.097088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c7f90 is same with the state(6) to be set
00:28:30.366 [2024-12-14 22:37:51.097167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fd870 is same with the state(6) to be set
00:28:30.366 [2024-12-14 22:37:51.097247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1495420 is same with the state(6) to be set
00:28:30.366 [2024-12-14 22:37:51.097332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.366 [2024-12-14 22:37:51.097384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.366 [2024-12-14 22:37:51.097391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e2c0 is same with the state(6) to be set
00:28:30.367 [2024-12-14 22:37:51.097412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.367 [2024-12-14 22:37:51.097421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.097428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.367 [2024-12-14 22:37:51.097435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.097443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.367 [2024-12-14 22:37:51.097450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.097457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.367 [2024-12-14 22:37:51.097464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.097470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149d0b0 is same with the state(6) to be set
00:28:30.367 [2024-12-14 22:37:51.097492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.367 [2024-12-14 22:37:51.097502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.097509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.367 [2024-12-14 22:37:51.097516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.097523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.367 [2024-12-14 22:37:51.097530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.097537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.367 [2024-12-14 22:37:51.097544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.097550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491690 is same with the state(6) to be set
00:28:30.367 [2024-12-14 22:37:51.099602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.099993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.099999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.100008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.100015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.100023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.100029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.100037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.100044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.100053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.367 [2024-12-14 22:37:51.100060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.367 [2024-12-14 22:37:51.100068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.368 [2024-12-14 22:37:51.100076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.368 [2024-12-14 22:37:51.100084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.368 [2024-12-14 22:37:51.108497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.368 [2024-12-14 22:37:51.108511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.368 [2024-12-14 22:37:51.108520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.368 [2024-12-14 22:37:51.108528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.368 [2024-12-14 22:37:51.108535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.368 [2024-12-14 22:37:51.108544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.368 [2024-12-14 22:37:51.108551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.368 [2024-12-14 22:37:51.108560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.368 [2024-12-14 22:37:51.108587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.368 [2024-12-14 22:37:51.108599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.368 [2024-12-14 22:37:51.108609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.368 [2024-12-14 22:37:51.108620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.368 [2024-12-14 22:37:51.108629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.368 [2024-12-14 22:37:51.108640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.368 [2024-12-14 22:37:51.108649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.368 [2024-12-14 22:37:51.108660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.368 [2024-12-14 22:37:51.108670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.368 [2024-12-14 22:37:51.108681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.368 [2024-12-14 22:37:51.108689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.108701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.108710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.108720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.108729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.108740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.108748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.108760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.108769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.108781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.108791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.108802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 
22:37:51.108810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.108821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.108830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.108843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.108853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.108864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.108872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.108885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.108894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.108909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.108919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.108930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.108939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.108950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.108959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.108970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.108980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.108991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.109000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.109011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.109020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.109031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.109040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.109051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.109059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.109071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.109080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.109091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.109102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.109113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.109123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.109134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.109143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.109154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.109164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.109174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.109183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.109195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.368 [2024-12-14 22:37:51.109203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.368 [2024-12-14 22:37:51.109240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:30.368 [2024-12-14 22:37:51.111592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f2e30 (9): Bad file descriptor 00:28:30.368 [2024-12-14 22:37:51.111631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fda90 (9): Bad file descriptor 00:28:30.368 [2024-12-14 22:37:51.111648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1487270 (9): Bad file descriptor 00:28:30.368 [2024-12-14 22:37:51.111671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c2610 (9): Bad file descriptor 00:28:30.368 [2024-12-14 22:37:51.111692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c7f90 (9): Bad file descriptor 00:28:30.368 [2024-12-14 22:37:51.111716] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fd870 (9): Bad file descriptor 00:28:30.368 [2024-12-14 22:37:51.111734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1495420 (9): Bad file descriptor 00:28:30.369 [2024-12-14 22:37:51.111752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e2c0 (9): Bad file descriptor 00:28:30.369 [2024-12-14 22:37:51.111772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149d0b0 (9): Bad file descriptor 00:28:30.369 [2024-12-14 22:37:51.111792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491690 (9): Bad file descriptor 00:28:30.369 [2024-12-14 22:37:51.111911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.111927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.111944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.111956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.111975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.111985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.111996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:30.369 [2024-12-14 22:37:51.112006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112481] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112594] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.369 [2024-12-14 22:37:51.112738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.369 [2024-12-14 22:37:51.112749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.370 [2024-12-14 22:37:51.112760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.370 [2024-12-14 22:37:51.112771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.370 [2024-12-14 22:37:51.112780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.370 [2024-12-14 22:37:51.112791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.370 [2024-12-14 22:37:51.112800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.370 [2024-12-14 22:37:51.112812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.370 [2024-12-14 22:37:51.112822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.370 [2024-12-14 
22:37:51.112833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.370 [2024-12-14 22:37:51.112844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.370 [2024-12-14 22:37:51.112856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.370 [2024-12-14 22:37:51.112865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.370 [2024-12-14 22:37:51.112876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.370 [2024-12-14 22:37:51.112886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.370 [2024-12-14 22:37:51.112896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.370 [2024-12-14 22:37:51.112912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.370 [2024-12-14 22:37:51.112923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.370 [2024-12-14 22:37:51.112932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.370 [2024-12-14 22:37:51.112944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.370 [2024-12-14 22:37:51.112952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.370 [repeated NOTICE pairs elided: nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion, 2024-12-14 22:37:51.112964 through 22:37:51.116217; WRITE sqid:1 cid:22-31 lba:35584-36736 and READ sqid:1 cid:0-4 lba:32768-33280, then WRITE sqid:1 cid:0-63 lba:24576-32640, all len:128, each completed ABORTED - SQ DELETION (00/08) qid:1]
00:28:30.372 [2024-12-14 22:37:51.116318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:28:30.372 [2024-12-14 22:37:51.119232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:28:30.372 [2024-12-14 22:37:51.119272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:28:30.372 [2024-12-14 22:37:51.119591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.372 [2024-12-14 22:37:51.119610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f2e30 with addr=10.0.0.2, port=4420
00:28:30.372 [2024-12-14 22:37:51.119622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2e30 is same with the state(6) to be set
00:28:30.372 [2024-12-14 22:37:51.120165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:28:30.372 [2024-12-14 22:37:51.120308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.372 [2024-12-14 22:37:51.120324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1491690 with addr=10.0.0.2, port=4420
00:28:30.372 [2024-12-14 22:37:51.120334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491690 is same with the state(6) to be set
00:28:30.372 [2024-12-14 22:37:51.120417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.372 [2024-12-14 22:37:51.120430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x149d0b0 with addr=10.0.0.2, port=4420
00:28:30.372 [2024-12-14 22:37:51.120439] nvme_tcp.c:
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149d0b0 is same with the state(6) to be set
00:28:30.372 [2024-12-14 22:37:51.120449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f2e30 (9): Bad file descriptor
00:28:30.372 [2024-12-14 22:37:51.120763] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:30.372 [2024-12-14 22:37:51.120812] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:30.372 [2024-12-14 22:37:51.120868] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:30.372 [2024-12-14 22:37:51.120923] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:30.372 [2024-12-14 22:37:51.120971] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:30.372 [2024-12-14 22:37:51.121284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.372 [2024-12-14 22:37:51.121300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fda90 with addr=10.0.0.2, port=4420
00:28:30.372 [2024-12-14 22:37:51.121309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fda90 is same with the state(6) to be set
00:28:30.372 [2024-12-14 22:37:51.121320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491690 (9): Bad file descriptor
00:28:30.372 [2024-12-14 22:37:51.121330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149d0b0 (9): Bad file descriptor
00:28:30.372 [2024-12-14 22:37:51.121339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:28:30.372 [2024-12-14 22:37:51.121346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:28:30.372 [2024-12-14 22:37:51.121355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:28:30.372 [2024-12-14 22:37:51.121363] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:28:30.372 [repeated NOTICE pairs elided: nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion, 2024-12-14 22:37:51.121409 through 22:37:51.121663; READ sqid:1 cid:0-15 lba:24576-26496, all len:128, each completed ABORTED - SQ DELETION (00/08) qid:1]
00:28:30.372 [2024-12-14 22:37:51.121672] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.372 [2024-12-14 22:37:51.121680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.121695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.121712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.121728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.121745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.121762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.121780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.121797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.121814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.121832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.121847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 
[2024-12-14 22:37:51.121864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.121881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.121897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.121919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.121936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.121952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.121968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.121984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.121995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122246] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122340] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.373 [2024-12-14 22:37:51.122363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.373 [2024-12-14 22:37:51.122370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.122378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.122385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.122393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.122401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.122409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.122417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.122425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.122431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.122440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.122446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.122456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.122463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.122472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1497be0 is same with the state(6) to be set 00:28:30.374 [2024-12-14 22:37:51.122606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fda90 (9): Bad file descriptor 00:28:30.374 [2024-12-14 22:37:51.122618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:30.374 [2024-12-14 22:37:51.122625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:30.374 [2024-12-14 22:37:51.122632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:30.374 [2024-12-14 22:37:51.122640] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:28:30.374 [2024-12-14 22:37:51.122648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:30.374 [2024-12-14 22:37:51.122655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:30.374 [2024-12-14 22:37:51.122662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:30.374 [2024-12-14 22:37:51.122667] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:30.374 [2024-12-14 22:37:51.122707] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:28:30.374 [2024-12-14 22:37:51.123673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:30.374 [2024-12-14 22:37:51.123698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:30.374 [2024-12-14 22:37:51.123707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:30.374 [2024-12-14 22:37:51.123714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:30.374 [2024-12-14 22:37:51.123722] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:28:30.374 [2024-12-14 22:37:51.123772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.123783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.123794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.123801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.123811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.123818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.123827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.123834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.123843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.123854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.123863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.123871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.123879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.123887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.123895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.123912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.123921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.123928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.123936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.123944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.123953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.123961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.123970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.123977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.123987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.123994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.124003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.124010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.124018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.124025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.124034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.124041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.124050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.124057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:30.374 [2024-12-14 22:37:51.124067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.124075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.124083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.124090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.124098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.124106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.124114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.124120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.124129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.124136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.124144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.124152] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.124160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.124167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.124175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.124182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.124192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.124199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.124207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.374 [2024-12-14 22:37:51.124214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.374 [2024-12-14 22:37:51.124224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 
22:37:51.124418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124509] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 
[2024-12-14 22:37:51.124688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124772] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.124795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.124804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892190 is same with the state(6) to be set 00:28:30.375 [2024-12-14 22:37:51.125792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.125805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.125816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.125823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.125832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.125842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.375 [2024-12-14 22:37:51.125851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.375 [2024-12-14 22:37:51.125859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.125868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.125875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.125884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.125891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.125899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.125911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.125920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.125927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.125936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.125943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:30.376 [2024-12-14 22:37:51.125952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.125959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.125967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.125975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.125983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.125991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.125999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126037] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 
22:37:51.126317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126405] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.376 [2024-12-14 22:37:51.126421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.376 [2024-12-14 22:37:51.126427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 
[2024-12-14 22:37:51.126586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [2024-12-14 22:37:51.126757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.377 [2024-12-14 22:37:51.126764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.377 [... identical READ (nsid:1, len:128) / ABORTED - SQ DELETION (00/08) notice pairs repeat for cid 55-63 (lba 31616-32640), ending at 22:37:51.126835 with nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189f2f0 is same with the state(6) to be set; the full cid 0-63 abort sequence (lba 24576-32640) then repeats verbatim from 22:37:51.127813, ending at 22:37:51.128835 with the same recv-state error for tqpair=0x234a800; a third identical cid 0-63 sequence begins at 22:37:51.129819 ...] 00:28:30.380 [2024-12-14 22:37:51.130435] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 
[2024-12-14 22:37:51.130614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.130836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.130844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25980e0 is same with the state(6) to be set 00:28:30.380 [2024-12-14 22:37:51.131831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.131845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.131857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.131864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:30.380 [2024-12-14 22:37:51.131875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.131883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.131891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.131898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.131915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.131922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.131931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.380 [2024-12-14 22:37:51.131937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.380 [2024-12-14 22:37:51.131946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.131952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.131962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.131968] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.131977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.131985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.131994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:30.381 [2024-12-14 22:37:51.132160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132249] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 
22:37:51.132518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.381 [2024-12-14 22:37:51.132543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.381 [2024-12-14 22:37:51.132550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.382 [2024-12-14 22:37:51.132558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.382 [2024-12-14 22:37:51.132565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.382 [2024-12-14 22:37:51.132574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.382 [2024-12-14 22:37:51.132581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.382 [2024-12-14 22:37:51.132590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.382 [2024-12-14 22:37:51.132598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.382 [2024-12-14 22:37:51.132611] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.382 [2024-12-14 22:37:51.132618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.382 [2024-12-14 22:37:51.132627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.382 [2024-12-14 22:37:51.132635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.382 [2024-12-14 22:37:51.132643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.382 [2024-12-14 22:37:51.132651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.382 [2024-12-14 22:37:51.132660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.382 [2024-12-14 22:37:51.132667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.382 [2024-12-14 22:37:51.132676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.382 [2024-12-14 22:37:51.132683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.382 [2024-12-14 22:37:51.132692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.382 [2024-12-14 22:37:51.132699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.382 [2024-12-14 22:37:51.132707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.382 [2024-12-14 22:37:51.132715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.382 [2024-12-14 22:37:51.132723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.382 [2024-12-14 22:37:51.132731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.382 [2024-12-14 22:37:51.132739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.382 [2024-12-14 22:37:51.132746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.382 [2024-12-14 22:37:51.132755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.382 [2024-12-14 22:37:51.132762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.382 [2024-12-14 22:37:51.132770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.382 [2024-12-14 22:37:51.132777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.382 [2024-12-14 22:37:51.132786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.382 
[2024-12-14 22:37:51.132793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:30.382 [2024-12-14 22:37:51.132802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:30.382 [2024-12-14 22:37:51.132810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:30.382 [2024-12-14 22:37:51.132819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:30.382 [2024-12-14 22:37:51.132826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:30.382 [2024-12-14 22:37:51.132835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:30.382 [2024-12-14 22:37:51.132843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:30.382 [2024-12-14 22:37:51.132852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:30.382 [2024-12-14 22:37:51.132862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:30.382 [2024-12-14 22:37:51.132871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:30.382 [2024-12-14 22:37:51.132880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:30.382 [2024-12-14 22:37:51.132887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27e5ac0 is same with the state(6) to be set 
00:28:30.382 [2024-12-14 22:37:51.133842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 
00:28:30.382 [2024-12-14 22:37:51.133865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 
00:28:30.382 [2024-12-14 22:37:51.133876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 
00:28:30.382 [2024-12-14 22:37:51.133887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 
00:28:30.382 [2024-12-14 22:37:51.134009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:30.382 [2024-12-14 22:37:51.134024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1495420 with addr=10.0.0.2, port=4420 
00:28:30.382 [2024-12-14 22:37:51.134033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1495420 is same with the state(6) to be set 
00:28:30.382 [2024-12-14 22:37:51.134082] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:28:30.382 [2024-12-14 22:37:51.134098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1495420 (9): Bad file descriptor 
00:28:30.382 task offset: 28416 on job bdev=Nvme10n1 fails 
00:28:30.382 
00:28:30.382 Latency(us) 
00:28:30.382 [2024-12-14T21:37:51.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:28:30.382 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:30.382 Job: Nvme1n1 ended in about 0.92 seconds with error 
00:28:30.382 Verification LBA range: start 0x0 length 0x400 
00:28:30.382 Nvme1n1 : 0.92 208.87 13.05 69.62 0.00 227345.07 17101.78 215707.06 
00:28:30.382 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:30.382 Job: Nvme2n1 ended in about 0.91 seconds with error 
00:28:30.382 Verification LBA range: start 0x0 length 0x400 
00:28:30.382 Nvme2n1 : 0.91 245.35 15.33 70.10 0.00 197227.28 11234.74 207717.91 
00:28:30.382 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:30.382 Job: Nvme3n1 ended in about 0.92 seconds with error 
00:28:30.382 Verification LBA range: start 0x0 length 0x400 
00:28:30.382 Nvme3n1 : 0.92 208.40 13.03 69.47 0.00 220078.32 19598.38 228689.43 
00:28:30.382 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:30.382 Job: Nvme4n1 ended in about 0.92 seconds with error 
00:28:30.382 Verification LBA range: start 0x0 length 0x400 
00:28:30.382 Nvme4n1 : 0.92 213.36 13.34 69.32 0.00 212509.54 13107.20 218702.99 
00:28:30.382 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:30.382 Job: Nvme5n1 ended in about 0.91 seconds with error 
00:28:30.382 Verification LBA range: start 0x0 length 0x400 
00:28:30.382 Nvme5n1 : 0.91 210.99 13.19 70.33 0.00 209464.69 14168.26 213709.78 
00:28:30.382 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:30.382 Job: Nvme6n1 ended in about 0.93 seconds with error 
Verification LBA range: start 0x0 length 0x400 00:28:30.382 Nvme6n1 : 0.93 207.50 12.97 69.17 0.00 209440.18 17975.59 212711.13 00:28:30.382 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:30.382 Job: Nvme7n1 ended in about 0.93 seconds with error 00:28:30.382 Verification LBA range: start 0x0 length 0x400 00:28:30.382 Nvme7n1 : 0.93 207.05 12.94 69.02 0.00 206141.20 16477.62 213709.78 00:28:30.382 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:30.382 Job: Nvme8n1 ended in about 0.93 seconds with error 00:28:30.382 Verification LBA range: start 0x0 length 0x400 00:28:30.382 Nvme8n1 : 0.93 211.97 13.25 68.86 0.00 198872.64 15541.39 214708.42 00:28:30.382 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:30.382 Job: Nvme9n1 ended in about 0.91 seconds with error 00:28:30.382 Verification LBA range: start 0x0 length 0x400 00:28:30.382 Nvme9n1 : 0.91 209.97 13.12 69.99 0.00 195172.27 7770.70 226692.14 00:28:30.382 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:30.382 Job: Nvme10n1 ended in about 0.91 seconds with error 00:28:30.382 Verification LBA range: start 0x0 length 0x400 00:28:30.382 Nvme10n1 : 0.91 211.70 13.23 70.57 0.00 189451.09 15042.07 228689.43 00:28:30.382 [2024-12-14T21:37:51.266Z] =================================================================================================================== 00:28:30.382 [2024-12-14T21:37:51.266Z] Total : 2135.17 133.45 696.44 0.00 206451.95 7770.70 228689.43 00:28:30.382 [2024-12-14 22:37:51.166164] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:30.382 [2024-12-14 22:37:51.166216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:28:30.382 [2024-12-14 22:37:51.166399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.382 [2024-12-14 22:37:51.166419] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e2c0 with addr=10.0.0.2, port=4420 00:28:30.382 [2024-12-14 22:37:51.166430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e2c0 is same with the state(6) to be set 00:28:30.382 [2024-12-14 22:37:51.166632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.382 [2024-12-14 22:37:51.166645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1487270 with addr=10.0.0.2, port=4420 00:28:30.383 [2024-12-14 22:37:51.166653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1487270 is same with the state(6) to be set 00:28:30.383 [2024-12-14 22:37:51.166751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.383 [2024-12-14 22:37:51.166763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c2610 with addr=10.0.0.2, port=4420 00:28:30.383 [2024-12-14 22:37:51.166771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2610 is same with the state(6) to be set 00:28:30.383 [2024-12-14 22:37:51.166896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.383 [2024-12-14 22:37:51.166938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c7f90 with addr=10.0.0.2, port=4420 00:28:30.383 [2024-12-14 22:37:51.166947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c7f90 is same with the state(6) to be set 00:28:30.383 [2024-12-14 22:37:51.168098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:30.383 [2024-12-14 22:37:51.168121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:30.383 [2024-12-14 22:37:51.168132] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:30.383 [2024-12-14 22:37:51.168142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:28:30.383 [2024-12-14 22:37:51.168364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.383 [2024-12-14 22:37:51.168380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fd870 with addr=10.0.0.2, port=4420 00:28:30.383 [2024-12-14 22:37:51.168389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fd870 is same with the state(6) to be set 00:28:30.383 [2024-12-14 22:37:51.168402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e2c0 (9): Bad file descriptor 00:28:30.383 [2024-12-14 22:37:51.168413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1487270 (9): Bad file descriptor 00:28:30.383 [2024-12-14 22:37:51.168423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c2610 (9): Bad file descriptor 00:28:30.383 [2024-12-14 22:37:51.168432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c7f90 (9): Bad file descriptor 00:28:30.383 [2024-12-14 22:37:51.168440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:30.383 [2024-12-14 22:37:51.168447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:30.383 [2024-12-14 22:37:51.168456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:30.383 [2024-12-14 22:37:51.168465] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:28:30.383 [2024-12-14 22:37:51.168501] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:28:30.383 [2024-12-14 22:37:51.168513] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:28:30.383 [2024-12-14 22:37:51.168525] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:28:30.383 [2024-12-14 22:37:51.168536] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:28:30.383 [2024-12-14 22:37:51.168743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.383 [2024-12-14 22:37:51.168759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f2e30 with addr=10.0.0.2, port=4420 00:28:30.383 [2024-12-14 22:37:51.168768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2e30 is same with the state(6) to be set 00:28:30.383 [2024-12-14 22:37:51.168900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.383 [2024-12-14 22:37:51.168920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x149d0b0 with addr=10.0.0.2, port=4420 00:28:30.383 [2024-12-14 22:37:51.168929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149d0b0 is same with the state(6) to be set 00:28:30.383 [2024-12-14 22:37:51.169073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.383 [2024-12-14 22:37:51.169086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1491690 with addr=10.0.0.2, port=4420 00:28:30.383 [2024-12-14 22:37:51.169094] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491690 is same with the state(6) to be set 00:28:30.383 [2024-12-14 22:37:51.169286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.383 [2024-12-14 22:37:51.169298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fda90 with addr=10.0.0.2, port=4420 00:28:30.383 [2024-12-14 22:37:51.169310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fda90 is same with the state(6) to be set 00:28:30.383 [2024-12-14 22:37:51.169319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fd870 (9): Bad file descriptor 00:28:30.383 [2024-12-14 22:37:51.169328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:30.383 [2024-12-14 22:37:51.169335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:30.383 [2024-12-14 22:37:51.169342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:30.383 [2024-12-14 22:37:51.169348] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:30.383 [2024-12-14 22:37:51.169356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:30.383 [2024-12-14 22:37:51.169363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:30.383 [2024-12-14 22:37:51.169370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:30.383 [2024-12-14 22:37:51.169376] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:28:30.383 [2024-12-14 22:37:51.169385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:30.383 [2024-12-14 22:37:51.169390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:30.383 [2024-12-14 22:37:51.169397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:30.383 [2024-12-14 22:37:51.169404] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:30.383 [2024-12-14 22:37:51.169411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:30.383 [2024-12-14 22:37:51.169417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:30.383 [2024-12-14 22:37:51.169424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:30.383 [2024-12-14 22:37:51.169430] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:28:30.383 [2024-12-14 22:37:51.169493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:30.383 [2024-12-14 22:37:51.169515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f2e30 (9): Bad file descriptor 00:28:30.383 [2024-12-14 22:37:51.169525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149d0b0 (9): Bad file descriptor 00:28:30.383 [2024-12-14 22:37:51.169533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491690 (9): Bad file descriptor 00:28:30.383 [2024-12-14 22:37:51.169543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fda90 (9): Bad file descriptor 00:28:30.383 [2024-12-14 22:37:51.169551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:30.383 [2024-12-14 22:37:51.169557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:30.383 [2024-12-14 22:37:51.169564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:30.383 [2024-12-14 22:37:51.169570] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:28:30.383 [2024-12-14 22:37:51.169743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.383 [2024-12-14 22:37:51.169760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1495420 with addr=10.0.0.2, port=4420 00:28:30.383 [2024-12-14 22:37:51.169768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1495420 is same with the state(6) to be set 00:28:30.383 [2024-12-14 22:37:51.169776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:30.383 [2024-12-14 22:37:51.169782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:30.383 [2024-12-14 22:37:51.169789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:30.383 [2024-12-14 22:37:51.169796] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:30.383 [2024-12-14 22:37:51.169805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:30.383 [2024-12-14 22:37:51.169810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:30.383 [2024-12-14 22:37:51.169817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:30.383 [2024-12-14 22:37:51.169823] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:28:30.383 [2024-12-14 22:37:51.169831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:30.383 [2024-12-14 22:37:51.169838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:30.383 [2024-12-14 22:37:51.169845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:30.383 [2024-12-14 22:37:51.169851] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:30.383 [2024-12-14 22:37:51.169858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:30.383 [2024-12-14 22:37:51.169865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:30.383 [2024-12-14 22:37:51.169872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:30.383 [2024-12-14 22:37:51.169878] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:28:30.383 [2024-12-14 22:37:51.169909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1495420 (9): Bad file descriptor 00:28:30.383 [2024-12-14 22:37:51.169934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:30.383 [2024-12-14 22:37:51.169943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:30.383 [2024-12-14 22:37:51.169950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:28:30.383 [2024-12-14 22:37:51.169957] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:30.643 22:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 428303 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 428303 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 428303 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@672 -- # es=1 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:32.020 rmmod nvme_tcp 00:28:32.020 rmmod nvme_fabrics 00:28:32.020 rmmod nvme_keyring 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@128 -- # set -e 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 428247 ']' 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 428247 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 428247 ']' 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 428247 00:28:32.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (428247) - No such process 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 428247 is not found' 00:28:32.020 Process with pid 428247 is not found 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.020 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.924 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:33.924 00:28:33.924 real 0m7.545s 00:28:33.924 user 0m18.683s 00:28:33.924 sys 0m1.304s 00:28:33.924 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:33.924 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:33.924 ************************************ 00:28:33.924 END TEST nvmf_shutdown_tc3 00:28:33.924 ************************************ 00:28:33.924 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:33.924 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:33.925 
************************************ 00:28:33.925 START TEST nvmf_shutdown_tc4 00:28:33.925 ************************************ 00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:33.925 
22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:28:33.925 Found 0000:af:00.0 (0x8086 - 0x159b)
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:28:33.925 Found 0000:af:00.1 (0x8086 - 0x159b)
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:28:33.925 Found net devices under 0000:af:00.0: cvl_0_0
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:28:33.925 Found net devices under 0000:af:00.1: cvl_0_1
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:33.925 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:33.926 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:28:33.926 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:28:33.926 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:33.926 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:28:33.926 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:28:33.926 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:33.926 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:33.926 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:28:33.926 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:28:33.926 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:28:33.926 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:34.184 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:34.184 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:34.184 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:34.184 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:34.184 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:34.184 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:34.184 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:34.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:34.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms
00:28:34.443
00:28:34.443 --- 10.0.0.2 ping statistics ---
00:28:34.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:34.443 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:34.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:34.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms
00:28:34.443
00:28:34.443 --- 10.0.0.1 ping statistics ---
00:28:34.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:34.443 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=429539
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 429539
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 429539 ']'
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:34.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:34.443 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:34.443 [2024-12-14 22:37:55.188803] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:28:34.443 [2024-12-14 22:37:55.188855] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:34.443 [2024-12-14 22:37:55.266591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:34.443 [2024-12-14 22:37:55.289447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:34.443 [2024-12-14 22:37:55.289485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:34.443 [2024-12-14 22:37:55.289496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:34.443 [2024-12-14 22:37:55.289502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:34.443 [2024-12-14 22:37:55.289507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:34.443 [2024-12-14 22:37:55.290956] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:28:34.443 [2024-12-14 22:37:55.291065] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:28:34.443 [2024-12-14 22:37:55.291176] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:28:34.443 [2024-12-14 22:37:55.291177] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:28:34.701 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:34.701 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0
00:28:34.701 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:34.701 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:34.701 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:34.701 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:34.701 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:34.701 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.701 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:34.701 [2024-12-14 22:37:55.430866] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:34.701 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:34.701 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:28:34.701 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:28:34.701 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:34.701 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.702 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:34.702 Malloc1
00:28:34.702 [2024-12-14 22:37:55.548529] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:34.702 Malloc2
00:28:34.960 Malloc3
00:28:34.960 Malloc4
00:28:34.960 Malloc5
00:28:34.960 Malloc6
00:28:34.960 Malloc7
00:28:34.960 Malloc8
00:28:35.219 Malloc9
00:28:35.219 Malloc10
00:28:35.219 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:35.219 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:28:35.219 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:35.219 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:35.219 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=429807
00:28:35.219 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:28:35.219 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:28:35.219 [2024-12-14 22:37:56.049985] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:28:40.497 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:40.497 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 429539
00:28:40.497 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 429539 ']'
00:28:40.497 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 429539
00:28:40.497 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:28:40.497 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:40.497 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 429539
00:28:40.497 22:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:40.497 22:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:40.497 22:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 429539'
00:28:40.497 killing process with pid 429539
00:28:40.497 22:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 429539
00:28:40.497 22:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 429539
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 starting I/O failed: -6
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 starting I/O failed: -6
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 starting I/O failed: -6
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 starting I/O failed: -6
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 starting I/O failed: -6
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 starting I/O failed: -6
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 starting I/O failed: -6
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 starting I/O failed: -6
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 starting I/O failed: -6
00:28:40.497 Write completed with error (sct=0, sc=8)
00:28:40.497 Write completed
with error (sct=0, sc=8) 00:28:40.497 Write completed with error (sct=0, sc=8) 00:28:40.497 Write completed with error (sct=0, sc=8) 00:28:40.497 starting I/O failed: -6 00:28:40.497 Write completed with error (sct=0, sc=8) 00:28:40.497 Write completed with error (sct=0, sc=8) 00:28:40.497 Write completed with error (sct=0, sc=8) 00:28:40.497 Write completed with error (sct=0, sc=8) 00:28:40.497 starting I/O failed: -6 00:28:40.498 [2024-12-14 22:38:01.046559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:40.498 Write completed with error (sct=0, sc=8) 00:28:40.498 [2024-12-14 22:38:01.046709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec5a0 is same with starting I/O failed: -6 00:28:40.498 the state(6) to be set 00:28:40.498 Write completed with error (sct=0, sc=8) 00:28:40.498 [2024-12-14 22:38:01.046753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec5a0 is same with the state(6) to be set 00:28:40.498 [2024-12-14 22:38:01.046761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec5a0 is same with the state(6) to be set 00:28:40.498 Write completed with error (sct=0, sc=8) 00:28:40.498 [2024-12-14 22:38:01.046768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec5a0 is same with the state(6) to be set 00:28:40.498 starting I/O failed: -6 00:28:40.498 [2024-12-14 22:38:01.046775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec5a0 is same with the state(6) to be set 00:28:40.498 [2024-12-14 22:38:01.046782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec5a0 is same with the state(6) to be set 00:28:40.498 Write completed with error (sct=0, sc=8) 00:28:40.498 [2024-12-14 22:38:01.046788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x14ec5a0 is same with the state(6) to be set 00:28:40.498 [2024-12-14 22:38:01.046794] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec5a0 is same with the state(6) to be set 00:28:40.498 Write completed with error (sct=0, sc=8) 00:28:40.498 starting I/O failed: -6 00:28:40.498 Write completed with error (sct=0, sc=8) 00:28:40.498 Write completed with error (sct=0, sc=8) 00:28:40.498 starting I/O failed: -6 00:28:40.498 [2024-12-14 22:38:01.046843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eca90 is same with the state(6) to be set 00:28:40.498 Write completed with error (sct=0, sc=8) 00:28:40.498 [2024-12-14 22:38:01.046869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eca90 is same with the state(6) to be set 00:28:40.498 Write completed with error (sct=0, sc=8) 00:28:40.498 [2024-12-14 22:38:01.046878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eca90 is same with the state(6) to be set 00:28:40.498 starting I/O failed: -6 00:28:40.498 [2024-12-14 22:38:01.046884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eca90 is same with the state(6) to be set 00:28:40.498 [2024-12-14 22:38:01.046891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eca90 is same with the state(6) to be set 00:28:40.498 Write completed with error (sct=0, sc=8) 00:28:40.498 [2024-12-14 22:38:01.046898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eca90 is same with the state(6) to be set 00:28:40.498 [2024-12-14 22:38:01.046911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eca90 is same with the state(6) to be set 00:28:40.498 Write completed with error (sct=0, sc=8) 00:28:40.498 [2024-12-14 22:38:01.046918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eca90 is 
same with the state(6) to be set
00:28:40.498 starting I/O failed: -6
00:28:40.498 [2024-12-14 22:38:01.046924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eca90 is same with the state(6) to be set
00:28:40.498 Write completed with error (sct=0, sc=8)
00:28:40.498 [2024-12-14 22:38:01.047256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ecf60 is same with the state(6) to be set
00:28:40.498 [2024-12-14 22:38:01.047287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ecf60 is same with the state(6) to be set
00:28:40.498 [2024-12-14 22:38:01.047295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ecf60 is same with the state(6) to be set
00:28:40.498 [2024-12-14 22:38:01.047304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ecf60 is same with the state(6) to be set
00:28:40.498 [2024-12-14 22:38:01.047310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ecf60 is same with the state(6) to be set
00:28:40.498 [2024-12-14 22:38:01.047316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ecf60 is same with the state(6) to be set
00:28:40.498 [2024-12-14 22:38:01.047322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ecf60 is same with the state(6) to be set
00:28:40.498 [2024-12-14 22:38:01.047328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ecf60 is same with the state(6) to be set
00:28:40.498 [2024-12-14 22:38:01.047504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.498 [2024-12-14 22:38:01.047749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301140 is same with the state(6) to be set
00:28:40.498 [2024-12-14 22:38:01.047774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301140 is same with the state(6) to be set
00:28:40.498 [2024-12-14 22:38:01.047782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301140 is same with the state(6) to be set
00:28:40.498 [2024-12-14 22:38:01.047788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301140 is same with the state(6) to be set
00:28:40.498 [2024-12-14 22:38:01.047795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301140 is same with the state(6) to be set
00:28:40.498 [2024-12-14 22:38:01.047801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301140 is same with the state(6) to be set
00:28:40.499 [2024-12-14 22:38:01.048476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:40.499 [2024-12-14 22:38:01.050034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.499 NVMe io qpair process completion error
00:28:40.499 [2024-12-14 22:38:01.053353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:40.500 [2024-12-14 22:38:01.054267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.500 [2024-12-14 22:38:01.055266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:40.500 [2024-12-14 22:38:01.055747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eec40 is same with the state(6) to be set
00:28:40.500 [2024-12-14 22:38:01.055770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eec40 is same with the state(6) to be set
00:28:40.500 [2024-12-14 22:38:01.055779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eec40 is same with the state(6) to be set
00:28:40.500 [2024-12-14 22:38:01.055786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eec40 is same with the state(6) to be set
00:28:40.500 [2024-12-14 22:38:01.055792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eec40 is same with the state(6) to be set
00:28:40.500 [2024-12-14 22:38:01.055799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eec40 is same with the state(6) to be set
00:28:40.500 [2024-12-14 22:38:01.055805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eec40 is same with the state(6) to be set
00:28:40.500 [2024-12-14 22:38:01.055811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eec40 is same with the state(6) to be set
00:28:40.500 [2024-12-14 22:38:01.055818] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eec40 is same with the state(6) to be set
00:28:40.500 [2024-12-14 22:38:01.056220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef110 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef110 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef110 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef110 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef110 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef110 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef5e0 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef5e0 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef5e0 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef5e0 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef5e0 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef5e0 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef5e0 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef5e0 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.501 NVMe io qpair process completion error
00:28:40.501 [2024-12-14 22:38:01.056864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ee770 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ee770 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ee770 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ee770 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ee770 is same with the state(6) to be set
00:28:40.501 [2024-12-14 22:38:01.056922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ee770 is same with the state(6) to be set
00:28:40.501 Write completed
with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 [2024-12-14 22:38:01.057951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed 
with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 [2024-12-14 22:38:01.058844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, 
sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.501 starting I/O failed: -6 00:28:40.501 Write completed with error (sct=0, sc=8) 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O 
failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 [2024-12-14 22:38:01.059853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 
00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: 
-6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O 
failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 [2024-12-14 22:38:01.061328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:40.502 NVMe io qpair process completion error 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 starting I/O failed: -6 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.502 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write 
completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O 
failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 [2024-12-14 22:38:01.062322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error 
(sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 [2024-12-14 22:38:01.063216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 
00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with 
error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 [2024-12-14 22:38:01.064218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.503 
Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.503 Write completed with error (sct=0, sc=8) 00:28:40.503 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 
00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: 
-6 00:28:40.504 Write completed with error (sct=0, sc=8) 00:28:40.504 starting I/O failed: -6 [the "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" messages repeat many times around each of the errors below]
00:28:40.504 [2024-12-14 22:38:01.066301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:40.504 NVMe io qpair process completion error
00:28:40.504 [2024-12-14 22:38:01.067321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.504 [2024-12-14 22:38:01.068176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.505 [2024-12-14 22:38:01.069198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:40.505 [2024-12-14 22:38:01.072894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:40.505 NVMe io qpair process completion error
00:28:40.506 [2024-12-14 22:38:01.073889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.506 [2024-12-14 22:38:01.074807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:40.506 [2024-12-14 22:38:01.075851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.507 [2024-12-14 22:38:01.079569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:40.507 NVMe io qpair process completion error
00:28:40.507 [2024-12-14 22:38:01.080598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.507 [2024-12-14 22:38:01.081530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:40.508 Write completed with
error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 
starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 [2024-12-14 22:38:01.082498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, 
sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error 
(sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with 
error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 [2024-12-14 22:38:01.084141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:40.508 NVMe io qpair process completion error 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with 
error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 starting I/O failed: -6 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.508 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 [2024-12-14 22:38:01.085185] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 
00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 [2024-12-14 22:38:01.086048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with 
error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 
starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 [2024-12-14 22:38:01.087075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, 
sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error 
(sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.509 Write completed with error (sct=0, sc=8) 00:28:40.509 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with 
error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 [2024-12-14 22:38:01.089215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:40.510 NVMe io qpair process completion error 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 
00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 starting I/O failed: -6 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 Write completed with error (sct=0, sc=8) 00:28:40.510 [2024-12-14 22:38:01.090289] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:40.510 starting I/O failed: -6
00:28:40.510 Write completed with error (sct=0, sc=8)
00:28:40.510 [2024-12-14 22:38:01.091077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.510 Write completed with error (sct=0, sc=8)
00:28:40.510 starting I/O failed: -6
00:28:40.511 [2024-12-14 22:38:01.092075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:40.511 Write completed with error (sct=0, sc=8)
00:28:40.511 starting I/O failed: -6
00:28:40.511 [2024-12-14 22:38:01.096076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.511 NVMe io qpair process completion error
00:28:40.511 Write completed with error (sct=0, sc=8)
00:28:40.511 starting I/O failed: -6
00:28:40.511 [2024-12-14 22:38:01.097064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:40.512 Write completed with error (sct=0, sc=8)
00:28:40.512 starting I/O failed: -6
00:28:40.512 [2024-12-14 22:38:01.097956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.512 Write completed with error (sct=0, sc=8)
00:28:40.512 starting I/O failed: -6
00:28:40.512 [2024-12-14 22:38:01.098936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:40.512 Write completed with error (sct=0, sc=8)
00:28:40.513 starting I/O failed: -6
00:28:40.513 [2024-12-14 22:38:01.103066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.513 NVMe io qpair process completion error
00:28:40.513 Initializing NVMe Controllers
00:28:40.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:28:40.513 Controller IO queue size 128, less than required.
00:28:40.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:40.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:28:40.513 Controller IO queue size 128, less than required.
00:28:40.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:40.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:28:40.513 Controller IO queue size 128, less than required.
00:28:40.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:40.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:28:40.513 Controller IO queue size 128, less than required.
00:28:40.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:40.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:28:40.513 Controller IO queue size 128, less than required.
00:28:40.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:40.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:28:40.513 Controller IO queue size 128, less than required.
00:28:40.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:40.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:28:40.513 Controller IO queue size 128, less than required.
00:28:40.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:40.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:40.513 Controller IO queue size 128, less than required.
00:28:40.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:40.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:28:40.513 Controller IO queue size 128, less than required.
00:28:40.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:40.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:28:40.513 Controller IO queue size 128, less than required.
00:28:40.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:40.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:40.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:40.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:40.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:40.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:40.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:40.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:40.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:40.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:40.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:40.513 Initialization complete. Launching workers.
00:28:40.513 ========================================================
00:28:40.513                                                                   Latency(us)
00:28:40.513 Device Information                                               :     IOPS    MiB/s   Average       min       max
00:28:40.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:   2188.39    94.03  58496.85    840.58 106447.11
00:28:40.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:   2198.10    94.45  58248.36    921.90 104985.07
00:28:40.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:   2182.56    93.78  58681.37    888.59 103296.69
00:28:40.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:   2160.33    92.83  59324.76    851.44 105463.79
00:28:40.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:   2158.61    92.75  59410.94    925.55 110051.32
00:28:40.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:   2185.37    93.90  58697.98    737.77 112355.71
00:28:40.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  2282.70    98.08  56211.79    795.16  96998.48
00:28:40.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   2254.00    96.85  56349.44    984.68  96952.31
00:28:40.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:   2229.83    95.81  57544.43    883.32  96205.53
00:28:40.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:   2189.04    94.06  58016.97    693.30  95528.71
00:28:40.513 ========================================================
00:28:40.513 Total                                                            : 22028.94   946.56  58080.45    693.30 112355.71
00:28:40.513
00:28:40.513 [2024-12-14 22:38:01.106045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127c6d0 is same with the state(6) to be set
00:28:40.513 [2024-12-14 22:38:01.106098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308ff0 is same with the state(6) to be set
00:28:40.513 [2024-12-14 22:38:01.106128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130df00 is same with the state(6) to be set
00:28:40.513 [2024-12-14 22:38:01.106155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cd30 is same with the state(6) to be set
00:28:40.513 [2024-12-14 22:38:01.106186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d060 is same with the state(6) to be set
00:28:40.513 [2024-12-14 22:38:01.106214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d390 is same with the state(6) to be set
00:28:40.513 [2024-12-14 22:38:01.106242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127c3a0 is same with the state(6) to be set
00:28:40.513 [2024-12-14 22:38:01.106270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127c070 is same with the state(6) to be set
00:28:40.513 [2024-12-14 22:38:01.106297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d6c0 is same with the state(6) to be set
00:28:40.513 [2024-12-14 22:38:01.106327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127ca00 is same with the state(6) to be set
00:28:40.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:40.772 22:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 429807
00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 429807
00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 429807
00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.709 rmmod nvme_tcp 00:28:41.709 rmmod nvme_fabrics 00:28:41.709 rmmod nvme_keyring 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 429539 ']' 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 429539 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 429539 ']' 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 429539 00:28:41.709 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (429539) - No such process 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 429539 is not found' 00:28:41.709 Process with pid 429539 is not found 00:28:41.709 
22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.709 22:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:44.245 00:28:44.245 real 0m9.862s 00:28:44.245 user 0m24.958s 00:28:44.245 sys 0m5.124s 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:44.245 22:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:44.245 ************************************ 00:28:44.245 END TEST nvmf_shutdown_tc4 00:28:44.245 ************************************ 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:28:44.245 00:28:44.245 real 0m40.515s 00:28:44.245 user 1m39.574s 00:28:44.245 sys 0m13.802s 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:44.245 ************************************ 00:28:44.245 END TEST nvmf_shutdown 00:28:44.245 ************************************ 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:44.245 ************************************ 00:28:44.245 START TEST nvmf_nsid 00:28:44.245 ************************************ 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:44.245 * Looking for test storage... 
00:28:44.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:44.245 
22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:44.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.245 --rc genhtml_branch_coverage=1 00:28:44.245 --rc genhtml_function_coverage=1 00:28:44.245 --rc genhtml_legend=1 00:28:44.245 --rc geninfo_all_blocks=1 00:28:44.245 --rc 
geninfo_unexecuted_blocks=1 00:28:44.245 00:28:44.245 ' 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:44.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.245 --rc genhtml_branch_coverage=1 00:28:44.245 --rc genhtml_function_coverage=1 00:28:44.245 --rc genhtml_legend=1 00:28:44.245 --rc geninfo_all_blocks=1 00:28:44.245 --rc geninfo_unexecuted_blocks=1 00:28:44.245 00:28:44.245 ' 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:44.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.245 --rc genhtml_branch_coverage=1 00:28:44.245 --rc genhtml_function_coverage=1 00:28:44.245 --rc genhtml_legend=1 00:28:44.245 --rc geninfo_all_blocks=1 00:28:44.245 --rc geninfo_unexecuted_blocks=1 00:28:44.245 00:28:44.245 ' 00:28:44.245 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:44.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.246 --rc genhtml_branch_coverage=1 00:28:44.246 --rc genhtml_function_coverage=1 00:28:44.246 --rc genhtml_legend=1 00:28:44.246 --rc geninfo_all_blocks=1 00:28:44.246 --rc geninfo_unexecuted_blocks=1 00:28:44.246 00:28:44.246 ' 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.246 22:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:44.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:28:44.246 22:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:50.817 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:50.817 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:50.817 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:50.818 Found net devices under 0000:af:00.0: cvl_0_0 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:50.818 Found net devices under 0000:af:00.1: cvl_0_1 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:50.818 22:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:50.818 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:28:50.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:28:50.818 00:28:50.818 --- 10.0.0.2 ping statistics --- 00:28:50.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.818 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:50.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:28:50.818 00:28:50.818 --- 10.0.0.1 ping statistics --- 00:28:50.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.818 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:50.818 22:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=434176 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 434176 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 434176 ']' 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.818 22:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:50.818 [2024-12-14 22:38:10.826314] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:28:50.818 [2024-12-14 22:38:10.826367] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.818 [2024-12-14 22:38:10.906096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.818 [2024-12-14 22:38:10.928120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.818 [2024-12-14 22:38:10.928156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.818 [2024-12-14 22:38:10.928164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.818 [2024-12-14 22:38:10.928172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.818 [2024-12-14 22:38:10.928177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:50.818 [2024-12-14 22:38:10.928659] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=434205 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.818 
22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=11ae4abc-7d0c-4934-9349-f3f774843f40 00:28:50.818 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:28:50.819 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=e23c2669-d34f-4293-8ab5-f2d56f4da519 00:28:50.819 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:28:50.819 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=d2200956-aeec-45fc-a234-dc3011c5dd76 00:28:50.819 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:28:50.819 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.819 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:50.819 null0 00:28:50.819 null1 00:28:50.819 [2024-12-14 22:38:11.115338] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:28:50.819 [2024-12-14 22:38:11.115386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434205 ] 00:28:50.819 null2 00:28:50.819 [2024-12-14 22:38:11.123155] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.819 [2024-12-14 22:38:11.147333] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.819 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.819 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 434205 /var/tmp/tgt2.sock 00:28:50.819 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 434205 ']' 00:28:50.819 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:28:50.819 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.819 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:28:50.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:28:50.819 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.819 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:50.819 [2024-12-14 22:38:11.189125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.819 [2024-12-14 22:38:11.211131] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.819 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.819 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:50.819 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:28:51.078 [2024-12-14 22:38:11.732723] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.078 [2024-12-14 22:38:11.748803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:28:51.078 nvme0n1 nvme0n2 00:28:51.078 nvme1n1 00:28:51.078 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:28:51.078 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:28:51.078 22:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:52.016 22:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:28:52.016 22:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:28:52.016 22:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:28:52.016 22:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:28:52.016 22:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:28:52.016 22:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:28:52.016 22:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:28:52.016 22:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:52.016 22:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:52.016 22:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:52.016 22:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:28:52.016 22:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:28:52.016 22:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 11ae4abc-7d0c-4934-9349-f3f774843f40 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:28:53.393 22:38:13 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=11ae4abc7d0c49349349f3f774843f40 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 11AE4ABC7D0C49349349F3F774843F40 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 11AE4ABC7D0C49349349F3F774843F40 == \1\1\A\E\4\A\B\C\7\D\0\C\4\9\3\4\9\3\4\9\F\3\F\7\7\4\8\4\3\F\4\0 ]] 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid e23c2669-d34f-4293-8ab5-f2d56f4da519 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:28:53.393 
22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:28:53.393 22:38:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e23c2669d34f42938ab5f2d56f4da519 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E23C2669D34F42938AB5F2D56F4DA519 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ E23C2669D34F42938AB5F2D56F4DA519 == \E\2\3\C\2\6\6\9\D\3\4\F\4\2\9\3\8\A\B\5\F\2\D\5\6\F\4\D\A\5\1\9 ]] 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid d2200956-aeec-45fc-a234-dc3011c5dd76 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d2200956aeec45fca234dc3011c5dd76 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D2200956AEEC45FCA234DC3011C5DD76 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ D2200956AEEC45FCA234DC3011C5DD76 == \D\2\2\0\0\9\5\6\A\E\E\C\4\5\F\C\A\2\3\4\D\C\3\0\1\1\C\5\D\D\7\6 ]] 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 434205 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 434205 ']' 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 434205 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:53.393 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:53.652 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 434205 00:28:53.652 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:53.652 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:53.652 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 434205' 00:28:53.652 killing process with pid 434205 00:28:53.652 22:38:14 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 434205 00:28:53.652 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 434205 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:53.912 rmmod nvme_tcp 00:28:53.912 rmmod nvme_fabrics 00:28:53.912 rmmod nvme_keyring 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 434176 ']' 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 434176 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 434176 ']' 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 434176 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:53.912 22:38:14 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 434176 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 434176' 00:28:53.912 killing process with pid 434176 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 434176 00:28:53.912 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 434176 00:28:54.171 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:54.171 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:54.171 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:54.171 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:28:54.171 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:28:54.171 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:54.171 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:28:54.171 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:54.171 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:54.171 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.171 22:38:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.171 22:38:14 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.706 22:38:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:56.706 00:28:56.706 real 0m12.295s 00:28:56.706 user 0m9.582s 00:28:56.706 sys 0m5.474s 00:28:56.706 22:38:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:56.706 22:38:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:56.706 ************************************ 00:28:56.706 END TEST nvmf_nsid 00:28:56.706 ************************************ 00:28:56.706 22:38:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:56.706 00:28:56.706 real 18m36.219s 00:28:56.706 user 49m17.100s 00:28:56.706 sys 4m35.949s 00:28:56.706 22:38:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:56.706 22:38:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:56.706 ************************************ 00:28:56.706 END TEST nvmf_target_extra 00:28:56.706 ************************************ 00:28:56.706 22:38:17 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:56.706 22:38:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:56.706 22:38:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:56.706 22:38:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:56.706 ************************************ 00:28:56.706 START TEST nvmf_host 00:28:56.706 ************************************ 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:56.706 * Looking for test storage... 
00:28:56.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:56.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.706 --rc genhtml_branch_coverage=1 00:28:56.706 --rc genhtml_function_coverage=1 00:28:56.706 --rc genhtml_legend=1 00:28:56.706 --rc geninfo_all_blocks=1 00:28:56.706 --rc geninfo_unexecuted_blocks=1 00:28:56.706 00:28:56.706 ' 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:56.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.706 --rc genhtml_branch_coverage=1 00:28:56.706 --rc genhtml_function_coverage=1 00:28:56.706 --rc genhtml_legend=1 00:28:56.706 --rc 
geninfo_all_blocks=1 00:28:56.706 --rc geninfo_unexecuted_blocks=1 00:28:56.706 00:28:56.706 ' 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:56.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.706 --rc genhtml_branch_coverage=1 00:28:56.706 --rc genhtml_function_coverage=1 00:28:56.706 --rc genhtml_legend=1 00:28:56.706 --rc geninfo_all_blocks=1 00:28:56.706 --rc geninfo_unexecuted_blocks=1 00:28:56.706 00:28:56.706 ' 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:56.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.706 --rc genhtml_branch_coverage=1 00:28:56.706 --rc genhtml_function_coverage=1 00:28:56.706 --rc genhtml_legend=1 00:28:56.706 --rc geninfo_all_blocks=1 00:28:56.706 --rc geninfo_unexecuted_blocks=1 00:28:56.706 00:28:56.706 ' 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:56.706 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:56.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.707 ************************************ 00:28:56.707 START TEST nvmf_multicontroller 00:28:56.707 ************************************ 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:56.707 * Looking for test storage... 
00:28:56.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:56.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.707 --rc genhtml_branch_coverage=1 00:28:56.707 --rc genhtml_function_coverage=1 
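[editor's note] The xtrace above exercises `cmp_versions` from SPDK's scripts/common.sh: both version strings are split on `.`, `-`, and `:` into arrays, then compared component-by-component until one side wins. A simplified standalone reconstruction of that logic (not the exact SPDK helper; it assumes purely numeric components, with no handling of suffixes like `rc1`):

```shell
#!/usr/bin/env bash
# lt VER1 VER2 -> exit 0 when VER1 < VER2, comparing numeric
# components split on . - : (missing components default to 0).
lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal is not less-than
}

lt 1.15 2 && echo "1.15 is less than 2"
```

This matches the decision seen in the trace (ver1[0]=1 < ver2[0]=2, so `lt 1.15 2` returns 0 and the newer lcov `--rc` option names are selected).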
00:28:56.707 --rc genhtml_legend=1 00:28:56.707 --rc geninfo_all_blocks=1 00:28:56.707 --rc geninfo_unexecuted_blocks=1 00:28:56.707 00:28:56.707 ' 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:56.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.707 --rc genhtml_branch_coverage=1 00:28:56.707 --rc genhtml_function_coverage=1 00:28:56.707 --rc genhtml_legend=1 00:28:56.707 --rc geninfo_all_blocks=1 00:28:56.707 --rc geninfo_unexecuted_blocks=1 00:28:56.707 00:28:56.707 ' 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:56.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.707 --rc genhtml_branch_coverage=1 00:28:56.707 --rc genhtml_function_coverage=1 00:28:56.707 --rc genhtml_legend=1 00:28:56.707 --rc geninfo_all_blocks=1 00:28:56.707 --rc geninfo_unexecuted_blocks=1 00:28:56.707 00:28:56.707 ' 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:56.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.707 --rc genhtml_branch_coverage=1 00:28:56.707 --rc genhtml_function_coverage=1 00:28:56.707 --rc genhtml_legend=1 00:28:56.707 --rc geninfo_all_blocks=1 00:28:56.707 --rc geninfo_unexecuted_blocks=1 00:28:56.707 00:28:56.707 ' 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:56.707 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:56.708 22:38:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:56.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:28:56.708 22:38:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.277 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:03.277 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:03.277 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:03.277 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:03.277 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:03.277 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:03.277 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:03.277 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:03.278 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:03.278 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:03.278 22:38:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:03.278 Found net devices under 0000:af:00.0: cvl_0_0 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:03.278 Found net devices under 0000:af:00.1: cvl_0_1 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:03.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:03.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:29:03.278 00:29:03.278 --- 10.0.0.2 ping statistics --- 00:29:03.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.278 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:03.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:03.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:29:03.278 00:29:03.278 --- 10.0.0.1 ping statistics --- 00:29:03.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.278 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.278 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=438428 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 438428 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 438428 ']' 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.279 [2024-12-14 22:38:23.488592] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:03.279 [2024-12-14 22:38:23.488647] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:03.279 [2024-12-14 22:38:23.566449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:03.279 [2024-12-14 22:38:23.589036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:03.279 [2024-12-14 22:38:23.589076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
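The target is launched with `-m 0xE`, and the log then reports "Total cores available: 3" with reactors on cores 1, 2 and 3: the core mask is a bitmap, and 0xE = 0b1110 selects exactly those cores. A quick illustration of the decoding:

```python
def cores_from_mask(mask: int) -> list:
    """Expand an SPDK-style core mask bitmap into the selected core IDs."""
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

print(cores_from_mask(0xE))  # [1, 2, 3] -- the three reactor threads reported in the log
print(cores_from_mask(0x1))  # [0]      -- the mask bdevperf is run with later (-c 0x1)
```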
00:29:03.279 [2024-12-14 22:38:23.589085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:03.279 [2024-12-14 22:38:23.589092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:03.279 [2024-12-14 22:38:23.589098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:03.279 [2024-12-14 22:38:23.590370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:03.279 [2024-12-14 22:38:23.590482] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.279 [2024-12-14 22:38:23.590483] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.279 [2024-12-14 22:38:23.729072] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.279 Malloc0 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.279 [2024-12-14 
22:38:23.792924] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.279 [2024-12-14 22:38:23.804873] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.279 Malloc1 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=438455 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 438455 /var/tmp/bdevperf.sock 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 438455 ']' 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:03.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:03.279 22:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.279 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.279 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:03.279 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:03.279 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.279 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.279 NVMe0n1 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.539 1 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:03.539 22:38:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.539 request: 00:29:03.539 { 00:29:03.539 "name": "NVMe0", 00:29:03.539 "trtype": "tcp", 00:29:03.539 "traddr": "10.0.0.2", 00:29:03.539 "adrfam": "ipv4", 00:29:03.539 "trsvcid": "4420", 00:29:03.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:03.539 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:03.539 "hostaddr": "10.0.0.1", 00:29:03.539 "prchk_reftag": false, 00:29:03.539 "prchk_guard": false, 00:29:03.539 "hdgst": false, 00:29:03.539 "ddgst": false, 00:29:03.539 "allow_unrecognized_csi": false, 00:29:03.539 "method": "bdev_nvme_attach_controller", 00:29:03.539 "req_id": 1 00:29:03.539 } 00:29:03.539 Got JSON-RPC error response 00:29:03.539 response: 00:29:03.539 { 00:29:03.539 "code": -114, 00:29:03.539 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:03.539 } 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:03.539 22:38:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.539 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.539 request: 00:29:03.539 { 00:29:03.539 "name": "NVMe0", 00:29:03.539 "trtype": "tcp", 00:29:03.539 "traddr": "10.0.0.2", 00:29:03.540 "adrfam": "ipv4", 00:29:03.540 "trsvcid": "4420", 00:29:03.540 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:03.540 "hostaddr": "10.0.0.1", 00:29:03.540 "prchk_reftag": false, 00:29:03.540 "prchk_guard": false, 00:29:03.540 "hdgst": false, 00:29:03.540 "ddgst": false, 00:29:03.540 "allow_unrecognized_csi": false, 00:29:03.540 "method": "bdev_nvme_attach_controller", 00:29:03.540 "req_id": 1 00:29:03.540 } 00:29:03.540 Got JSON-RPC error response 00:29:03.540 response: 00:29:03.540 { 00:29:03.540 "code": -114, 00:29:03.540 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:03.540 } 00:29:03.540 22:38:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.540 request: 00:29:03.540 { 00:29:03.540 "name": "NVMe0", 00:29:03.540 "trtype": "tcp", 00:29:03.540 "traddr": "10.0.0.2", 00:29:03.540 "adrfam": "ipv4", 00:29:03.540 "trsvcid": "4420", 00:29:03.540 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:03.540 "hostaddr": "10.0.0.1", 00:29:03.540 "prchk_reftag": false, 00:29:03.540 "prchk_guard": false, 00:29:03.540 "hdgst": false, 00:29:03.540 "ddgst": false, 00:29:03.540 "multipath": "disable", 00:29:03.540 "allow_unrecognized_csi": false, 00:29:03.540 "method": "bdev_nvme_attach_controller", 00:29:03.540 "req_id": 1 00:29:03.540 } 00:29:03.540 Got JSON-RPC error response 00:29:03.540 response: 00:29:03.540 { 00:29:03.540 "code": -114, 00:29:03.540 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:03.540 } 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.540 request: 00:29:03.540 { 00:29:03.540 "name": "NVMe0", 00:29:03.540 "trtype": "tcp", 00:29:03.540 "traddr": "10.0.0.2", 00:29:03.540 "adrfam": "ipv4", 00:29:03.540 "trsvcid": "4420", 00:29:03.540 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:03.540 "hostaddr": "10.0.0.1", 00:29:03.540 "prchk_reftag": false, 00:29:03.540 "prchk_guard": false, 00:29:03.540 "hdgst": false, 00:29:03.540 "ddgst": false, 00:29:03.540 "multipath": "failover", 00:29:03.540 "allow_unrecognized_csi": false, 00:29:03.540 "method": "bdev_nvme_attach_controller", 00:29:03.540 "req_id": 1 00:29:03.540 } 00:29:03.540 Got JSON-RPC error response 00:29:03.540 response: 00:29:03.540 { 00:29:03.540 "code": -114, 00:29:03.540 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:03.540 } 00:29:03.540 22:38:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.540 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.799 NVMe0n1 00:29:03.799 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.799 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:03.799 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.799 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.799 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.799 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:03.799 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.799 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.799 00:29:03.799 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.799 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:03.799 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:03.799 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.799 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.799 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.799 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:03.800 22:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:05.176 { 00:29:05.176 "results": [ 00:29:05.176 { 00:29:05.176 "job": "NVMe0n1", 00:29:05.176 "core_mask": "0x1", 00:29:05.176 "workload": "write", 00:29:05.176 "status": "finished", 00:29:05.176 "queue_depth": 128, 00:29:05.176 "io_size": 4096, 00:29:05.176 "runtime": 1.006653, 00:29:05.176 "iops": 25068.221124856333, 00:29:05.176 "mibps": 97.92273876897005, 00:29:05.176 "io_failed": 0, 00:29:05.176 "io_timeout": 0, 00:29:05.176 "avg_latency_us": 5096.738127053318, 00:29:05.176 "min_latency_us": 3058.346666666667, 00:29:05.176 "max_latency_us": 15354.148571428572 00:29:05.176 } 00:29:05.176 ], 00:29:05.176 "core_count": 1 00:29:05.176 } 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 438455 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 438455 ']' 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 438455 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 438455 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 438455' 00:29:05.176 killing process with pid 438455 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 438455 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 438455 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:05.176 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:05.176 [2024-12-14 22:38:23.910268] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:05.176 [2024-12-14 22:38:23.910316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438455 ] 00:29:05.176 [2024-12-14 22:38:23.981923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.176 [2024-12-14 22:38:24.004502] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.176 [2024-12-14 22:38:24.529145] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name 95961de3-809f-4452-8ac6-3f94a6beca7c already exists 00:29:05.176 [2024-12-14 22:38:24.529173] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:95961de3-809f-4452-8ac6-3f94a6beca7c alias for bdev NVMe1n1 00:29:05.176 [2024-12-14 22:38:24.529181] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:05.176 Running I/O for 1 seconds... 00:29:05.176 25014.00 IOPS, 97.71 MiB/s 00:29:05.176 Latency(us) 00:29:05.176 [2024-12-14T21:38:26.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.176 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:05.176 NVMe0n1 : 1.01 25068.22 97.92 0.00 0.00 5096.74 3058.35 15354.15 00:29:05.176 [2024-12-14T21:38:26.060Z] =================================================================================================================== 00:29:05.176 [2024-12-14T21:38:26.060Z] Total : 25068.22 97.92 0.00 0.00 5096.74 3058.35 15354.15 00:29:05.176 Received shutdown signal, test time was about 1.000000 seconds 00:29:05.176 00:29:05.176 Latency(us) 00:29:05.176 [2024-12-14T21:38:26.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.176 [2024-12-14T21:38:26.060Z] =================================================================================================================== 00:29:05.176 [2024-12-14T21:38:26.060Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:29:05.176 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:05.176 rmmod nvme_tcp 00:29:05.176 rmmod nvme_fabrics 00:29:05.176 rmmod nvme_keyring 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 438428 ']' 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 438428 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 438428 ']' 00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 438428 
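The `killprocess` trace above probes pid 438428 with `kill -0` before deciding how to terminate it. As a standalone sketch of that idiom (`is_alive` is an illustrative helper, not a function from the SPDK scripts): signal 0 delivers nothing to the target; it only reports, via exit status, whether the pid exists and is signalable.

```shell
#!/usr/bin/env bash
# Sketch of the `kill -0` liveness idiom used by killprocess above.
# Signal 0 sends nothing; the exit status alone tells us whether the
# pid exists and we are allowed to signal it.

is_alive() {
    kill -0 "$1" 2>/dev/null
}

if is_alive $$; then
    echo "pid $$ is alive"          # the current shell always exists
fi
if ! is_alive 999999999; then
    echo "pid 999999999 is gone"    # far above any plausible pid_max
fi
```

Note this is exactly why the trace runs `kill -0 438428` (autotest_common.sh@958) before choosing between `kill` and `wait`.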
00:29:05.176 22:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:05.176 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:05.176 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 438428 00:29:05.176 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:05.176 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:05.176 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 438428' 00:29:05.176 killing process with pid 438428 00:29:05.176 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 438428 00:29:05.176 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 438428 00:29:05.435 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:05.435 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:05.435 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:05.435 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:05.435 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:05.435 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:05.435 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:05.435 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:05.435 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:29:05.435 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.435 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.435 22:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.971 00:29:07.971 real 0m10.968s 00:29:07.971 user 0m11.762s 00:29:07.971 sys 0m5.186s 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.971 ************************************ 00:29:07.971 END TEST nvmf_multicontroller 00:29:07.971 ************************************ 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.971 ************************************ 00:29:07.971 START TEST nvmf_aer 00:29:07.971 ************************************ 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:07.971 * Looking for test storage... 
00:29:07.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:07.971 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:07.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.972 --rc genhtml_branch_coverage=1 00:29:07.972 --rc genhtml_function_coverage=1 00:29:07.972 --rc genhtml_legend=1 00:29:07.972 --rc geninfo_all_blocks=1 00:29:07.972 --rc geninfo_unexecuted_blocks=1 00:29:07.972 00:29:07.972 ' 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:07.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.972 --rc 
genhtml_branch_coverage=1 00:29:07.972 --rc genhtml_function_coverage=1 00:29:07.972 --rc genhtml_legend=1 00:29:07.972 --rc geninfo_all_blocks=1 00:29:07.972 --rc geninfo_unexecuted_blocks=1 00:29:07.972 00:29:07.972 ' 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:07.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.972 --rc genhtml_branch_coverage=1 00:29:07.972 --rc genhtml_function_coverage=1 00:29:07.972 --rc genhtml_legend=1 00:29:07.972 --rc geninfo_all_blocks=1 00:29:07.972 --rc geninfo_unexecuted_blocks=1 00:29:07.972 00:29:07.972 ' 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:07.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.972 --rc genhtml_branch_coverage=1 00:29:07.972 --rc genhtml_function_coverage=1 00:29:07.972 --rc genhtml_legend=1 00:29:07.972 --rc geninfo_all_blocks=1 00:29:07.972 --rc geninfo_unexecuted_blocks=1 00:29:07.972 00:29:07.972 ' 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.972 22:38:28 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:07.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.972 22:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:14.543 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:14.543 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.543 22:38:34 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:14.543 Found net devices under 0000:af:00.0: cvl_0_0 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.543 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:14.544 Found net devices under 0000:af:00.1: cvl_0_1 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:14.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:14.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:29:14.544 00:29:14.544 --- 10.0.0.2 ping statistics --- 00:29:14.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.544 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:14.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:29:14.544 00:29:14.544 --- 10.0.0.1 ping statistics --- 00:29:14.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.544 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=442292 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 442292 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 442292 ']' 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:14.544 [2024-12-14 22:38:34.483878] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:14.544 [2024-12-14 22:38:34.483930] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.544 [2024-12-14 22:38:34.564272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:14.544 [2024-12-14 22:38:34.587425] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
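The `[: : integer expression expected` message emitted earlier (nvmf/common.sh line 33, from the traced `'[' '' -eq 1 ']'`) is a classic POSIX `[` pitfall: `-eq` requires both operands to be integers, and an unset or empty variable expands to an empty string. A minimal sketch of the failure and a defensive rewrite — `MAYBE_FLAG` is a hypothetical stand-in for whichever variable was empty, not a name from the SPDK scripts:

```shell
#!/usr/bin/env bash
# Reproduces the "[: : integer expression expected" failure mode seen
# earlier in this log, then shows the guarded form. MAYBE_FLAG is a
# hypothetical stand-in for the empty variable in nvmf/common.sh.

MAYBE_FLAG=""

# Failing pattern: `[ "" -eq 1 ]` is a test error (exit status 2),
# not a clean "false" (exit status 1).
if [ "$MAYBE_FLAG" -eq 1 ] 2>/dev/null; then
    echo "flag set"
fi

# Guarding with a default expansion keeps the test well-formed
# whether or not the variable is set.
if [ "${MAYBE_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"
fi
```

In this log the error is harmless because the broken `[` is used as an `if` condition, but under `set -e` outside a condition the same pattern would abort the script.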
00:29:14.544 [2024-12-14 22:38:34.587463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.544 [2024-12-14 22:38:34.587470] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.544 [2024-12-14 22:38:34.587476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.544 [2024-12-14 22:38:34.587481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:14.544 [2024-12-14 22:38:34.588867] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.544 [2024-12-14 22:38:34.588976] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:14.544 [2024-12-14 22:38:34.589084] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.544 [2024-12-14 22:38:34.589086] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:14.544 [2024-12-14 22:38:34.720708] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:14.544 Malloc0 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:14.544 [2024-12-14 22:38:34.779305] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.544 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:14.544 [ 00:29:14.544 { 00:29:14.544 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:14.544 "subtype": "Discovery", 00:29:14.544 "listen_addresses": [], 00:29:14.544 "allow_any_host": true, 00:29:14.544 "hosts": [] 00:29:14.544 }, 00:29:14.544 { 00:29:14.544 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:14.544 "subtype": "NVMe", 00:29:14.544 "listen_addresses": [ 00:29:14.545 { 00:29:14.545 "trtype": "TCP", 00:29:14.545 "adrfam": "IPv4", 00:29:14.545 "traddr": "10.0.0.2", 00:29:14.545 "trsvcid": "4420" 00:29:14.545 } 00:29:14.545 ], 00:29:14.545 "allow_any_host": true, 00:29:14.545 "hosts": [], 00:29:14.545 "serial_number": "SPDK00000000000001", 00:29:14.545 "model_number": "SPDK bdev Controller", 00:29:14.545 "max_namespaces": 2, 00:29:14.545 "min_cntlid": 1, 00:29:14.545 "max_cntlid": 65519, 00:29:14.545 "namespaces": [ 00:29:14.545 { 00:29:14.545 "nsid": 1, 00:29:14.545 "bdev_name": "Malloc0", 00:29:14.545 "name": "Malloc0", 00:29:14.545 "nguid": "A5C1F5504A15481AB6CDEDB2B177A2BA", 00:29:14.545 "uuid": "a5c1f550-4a15-481a-b6cd-edb2b177a2ba" 00:29:14.545 } 00:29:14.545 ] 00:29:14.545 } 00:29:14.545 ] 00:29:14.545 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.545 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:14.545 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:14.545 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=442393 00:29:14.545 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:14.545 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:14.545 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:14.545 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:14.545 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:14.545 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:14.545 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:14.545 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:14.545 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:14.545 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:14.545 22:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:14.545 Malloc1 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:14.545 Asynchronous Event Request test 00:29:14.545 Attaching to 10.0.0.2 00:29:14.545 Attached to 10.0.0.2 00:29:14.545 Registering asynchronous event callbacks... 00:29:14.545 Starting namespace attribute notice tests for all controllers... 00:29:14.545 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:14.545 aer_cb - Changed Namespace 00:29:14.545 Cleaning up... 
00:29:14.545 [ 00:29:14.545 { 00:29:14.545 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:14.545 "subtype": "Discovery", 00:29:14.545 "listen_addresses": [], 00:29:14.545 "allow_any_host": true, 00:29:14.545 "hosts": [] 00:29:14.545 }, 00:29:14.545 { 00:29:14.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:14.545 "subtype": "NVMe", 00:29:14.545 "listen_addresses": [ 00:29:14.545 { 00:29:14.545 "trtype": "TCP", 00:29:14.545 "adrfam": "IPv4", 00:29:14.545 "traddr": "10.0.0.2", 00:29:14.545 "trsvcid": "4420" 00:29:14.545 } 00:29:14.545 ], 00:29:14.545 "allow_any_host": true, 00:29:14.545 "hosts": [], 00:29:14.545 "serial_number": "SPDK00000000000001", 00:29:14.545 "model_number": "SPDK bdev Controller", 00:29:14.545 "max_namespaces": 2, 00:29:14.545 "min_cntlid": 1, 00:29:14.545 "max_cntlid": 65519, 00:29:14.545 "namespaces": [ 00:29:14.545 { 00:29:14.545 "nsid": 1, 00:29:14.545 "bdev_name": "Malloc0", 00:29:14.545 "name": "Malloc0", 00:29:14.545 "nguid": "A5C1F5504A15481AB6CDEDB2B177A2BA", 00:29:14.545 "uuid": "a5c1f550-4a15-481a-b6cd-edb2b177a2ba" 00:29:14.545 }, 00:29:14.545 { 00:29:14.545 "nsid": 2, 00:29:14.545 "bdev_name": "Malloc1", 00:29:14.545 "name": "Malloc1", 00:29:14.545 "nguid": "4DB17BE2D2C2467DAE8FC198B5C7B573", 00:29:14.545 "uuid": "4db17be2-d2c2-467d-ae8f-c198b5c7b573" 00:29:14.545 } 00:29:14.545 ] 00:29:14.545 } 00:29:14.545 ] 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 442393 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.545 22:38:35 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:14.545 rmmod nvme_tcp 00:29:14.545 rmmod nvme_fabrics 00:29:14.545 rmmod nvme_keyring 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
442292 ']' 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 442292 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 442292 ']' 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 442292 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 442292 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 442292' 00:29:14.545 killing process with pid 442292 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 442292 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 442292 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.545 22:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:17.082 00:29:17.082 real 0m9.107s 00:29:17.082 user 0m5.032s 00:29:17.082 sys 0m4.766s 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:17.082 ************************************ 00:29:17.082 END TEST nvmf_aer 00:29:17.082 ************************************ 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.082 ************************************ 00:29:17.082 START TEST nvmf_async_init 00:29:17.082 ************************************ 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:17.082 * Looking for test storage... 
00:29:17.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:17.082 22:38:37 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:17.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.082 --rc genhtml_branch_coverage=1 00:29:17.082 --rc genhtml_function_coverage=1 00:29:17.082 --rc genhtml_legend=1 00:29:17.082 --rc geninfo_all_blocks=1 00:29:17.082 --rc geninfo_unexecuted_blocks=1 00:29:17.082 
00:29:17.082 ' 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:17.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.082 --rc genhtml_branch_coverage=1 00:29:17.082 --rc genhtml_function_coverage=1 00:29:17.082 --rc genhtml_legend=1 00:29:17.082 --rc geninfo_all_blocks=1 00:29:17.082 --rc geninfo_unexecuted_blocks=1 00:29:17.082 00:29:17.082 ' 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:17.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.082 --rc genhtml_branch_coverage=1 00:29:17.082 --rc genhtml_function_coverage=1 00:29:17.082 --rc genhtml_legend=1 00:29:17.082 --rc geninfo_all_blocks=1 00:29:17.082 --rc geninfo_unexecuted_blocks=1 00:29:17.082 00:29:17.082 ' 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:17.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.082 --rc genhtml_branch_coverage=1 00:29:17.082 --rc genhtml_function_coverage=1 00:29:17.082 --rc genhtml_legend=1 00:29:17.082 --rc geninfo_all_blocks=1 00:29:17.082 --rc geninfo_unexecuted_blocks=1 00:29:17.082 00:29:17.082 ' 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.082 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:17.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=61e3422c767b4f5cb646f051f8c9933d 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:17.083 22:38:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.655 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:23.655 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:23.655 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:23.655 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:23.655 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:23.655 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:23.655 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:23.655 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:23.655 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:23.655 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:23.655 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:23.655 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:23.655 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:29:23.655 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:23.655 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:23.655 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:23.656 22:38:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:23.656 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:23.656 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:23.656 Found net devices under 0000:af:00.0: cvl_0_0 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:23.656 Found net devices under 0000:af:00.1: cvl_0_1 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:23.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:23.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:29:23.656 00:29:23.656 --- 10.0.0.2 ping statistics --- 00:29:23.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.656 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:23.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:29:23.656 00:29:23.656 --- 10.0.0.1 ping statistics --- 00:29:23.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.656 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.656 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=445865 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 445865 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 445865 ']' 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.657 [2024-12-14 22:38:43.697003] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
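The `nvmf_tgt` process above is launched with `ip netns exec cvl_0_0_ns_spdk`, i.e. inside the network namespace the `nvmf_tcp_init` steps built a few entries earlier. A dry-run sketch of that two-port topology follows; commands are echoed rather than executed, since the real sequence needs root and the physical `cvl_0_0`/`cvl_0_1` ports, and the `run` wrapper is a stand-in of ours, not part of the suite:

```shell
# Dry-run sketch of the namespace topology built by nvmf_tcp_init:
# the target port moves into its own netns, the initiator port stays
# in the default namespace, and each side gets one /24 address.

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # target-side port (10.0.0.2 inside the netns)
INI_IF=cvl_0_1      # initiator-side port (10.0.0.1 in the default ns)

run() { echo "+ $*"; }   # swap the echo for "$@" (as root) to apply for real

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
```

The two `ping -c 1` checks in the log (10.0.0.2 from the default namespace, 10.0.0.1 from inside the netns) then confirm the path works in both directions before the target starts.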
00:29:23.657 [2024-12-14 22:38:43.697048] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.657 [2024-12-14 22:38:43.774140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.657 [2024-12-14 22:38:43.795287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.657 [2024-12-14 22:38:43.795324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.657 [2024-12-14 22:38:43.795330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.657 [2024-12-14 22:38:43.795336] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.657 [2024-12-14 22:38:43.795341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
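The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from `waitforlisten`, which blocks until the freshly forked target answers on its RPC socket. The gist is a bounded poll; the stand-in below only checks that the socket path exists (the real helper, as we understand it, also probes the socket with an actual RPC before declaring the target ready):

```shell
# Simplified waitforlisten: poll until a path appears or retries run out.
# Existence-only check; the autotest helper layers an RPC probe on top,
# but the retry/timeout shape is the same.
wait_for_path() {
    local path=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}
```

Once this returns, the suite disarms the startup path (`timing_exit start_nvmf_tgt`) and begins issuing `rpc_cmd` calls against the live target.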
00:29:23.657 [2024-12-14 22:38:43.795820] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.657 [2024-12-14 22:38:43.926723] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.657 null0 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 61e3422c767b4f5cb646f051f8c9933d 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.657 [2024-12-14 22:38:43.978979] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.657 22:38:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.657 nvme0n1 00:29:23.657 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.657 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:23.657 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.657 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.657 [ 00:29:23.657 { 00:29:23.657 "name": "nvme0n1", 00:29:23.657 "aliases": [ 00:29:23.657 "61e3422c-767b-4f5c-b646-f051f8c9933d" 00:29:23.657 ], 00:29:23.657 "product_name": "NVMe disk", 00:29:23.657 "block_size": 512, 00:29:23.657 "num_blocks": 2097152, 00:29:23.657 "uuid": "61e3422c-767b-4f5c-b646-f051f8c9933d", 00:29:23.657 "numa_id": 1, 00:29:23.657 "assigned_rate_limits": { 00:29:23.657 "rw_ios_per_sec": 0, 00:29:23.657 "rw_mbytes_per_sec": 0, 00:29:23.657 "r_mbytes_per_sec": 0, 00:29:23.657 "w_mbytes_per_sec": 0 00:29:23.657 }, 00:29:23.657 "claimed": false, 00:29:23.657 "zoned": false, 00:29:23.657 "supported_io_types": { 00:29:23.657 "read": true, 00:29:23.657 "write": true, 00:29:23.657 "unmap": false, 00:29:23.657 "flush": true, 00:29:23.657 "reset": true, 00:29:23.657 "nvme_admin": true, 00:29:23.657 "nvme_io": true, 00:29:23.657 "nvme_io_md": false, 00:29:23.657 "write_zeroes": true, 00:29:23.657 "zcopy": false, 00:29:23.657 "get_zone_info": false, 00:29:23.657 "zone_management": false, 00:29:23.657 "zone_append": false, 00:29:23.657 "compare": true, 00:29:23.657 "compare_and_write": true, 00:29:23.657 "abort": true, 00:29:23.657 "seek_hole": false, 00:29:23.657 "seek_data": false, 00:29:23.657 "copy": true, 00:29:23.657 
"nvme_iov_md": false 00:29:23.657 }, 00:29:23.657 "memory_domains": [ 00:29:23.657 { 00:29:23.657 "dma_device_id": "system", 00:29:23.657 "dma_device_type": 1 00:29:23.657 } 00:29:23.657 ], 00:29:23.657 "driver_specific": { 00:29:23.657 "nvme": [ 00:29:23.657 { 00:29:23.657 "trid": { 00:29:23.657 "trtype": "TCP", 00:29:23.657 "adrfam": "IPv4", 00:29:23.657 "traddr": "10.0.0.2", 00:29:23.657 "trsvcid": "4420", 00:29:23.657 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:23.657 }, 00:29:23.657 "ctrlr_data": { 00:29:23.657 "cntlid": 1, 00:29:23.657 "vendor_id": "0x8086", 00:29:23.657 "model_number": "SPDK bdev Controller", 00:29:23.657 "serial_number": "00000000000000000000", 00:29:23.657 "firmware_revision": "25.01", 00:29:23.657 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:23.657 "oacs": { 00:29:23.657 "security": 0, 00:29:23.657 "format": 0, 00:29:23.657 "firmware": 0, 00:29:23.657 "ns_manage": 0 00:29:23.657 }, 00:29:23.657 "multi_ctrlr": true, 00:29:23.657 "ana_reporting": false 00:29:23.657 }, 00:29:23.657 "vs": { 00:29:23.657 "nvme_version": "1.3" 00:29:23.657 }, 00:29:23.657 "ns_data": { 00:29:23.657 "id": 1, 00:29:23.657 "can_share": true 00:29:23.657 } 00:29:23.657 } 00:29:23.657 ], 00:29:23.657 "mp_policy": "active_passive" 00:29:23.657 } 00:29:23.657 } 00:29:23.657 ] 00:29:23.657 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.657 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:23.657 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.657 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.657 [2024-12-14 22:38:44.247531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:23.657 [2024-12-14 22:38:44.247588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1adb230 (9): Bad file descriptor 00:29:23.657 [2024-12-14 22:38:44.379979] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:23.657 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.657 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:23.657 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.657 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.657 [ 00:29:23.657 { 00:29:23.657 "name": "nvme0n1", 00:29:23.657 "aliases": [ 00:29:23.657 "61e3422c-767b-4f5c-b646-f051f8c9933d" 00:29:23.657 ], 00:29:23.657 "product_name": "NVMe disk", 00:29:23.657 "block_size": 512, 00:29:23.657 "num_blocks": 2097152, 00:29:23.657 "uuid": "61e3422c-767b-4f5c-b646-f051f8c9933d", 00:29:23.657 "numa_id": 1, 00:29:23.657 "assigned_rate_limits": { 00:29:23.657 "rw_ios_per_sec": 0, 00:29:23.657 "rw_mbytes_per_sec": 0, 00:29:23.657 "r_mbytes_per_sec": 0, 00:29:23.657 "w_mbytes_per_sec": 0 00:29:23.657 }, 00:29:23.657 "claimed": false, 00:29:23.657 "zoned": false, 00:29:23.657 "supported_io_types": { 00:29:23.657 "read": true, 00:29:23.658 "write": true, 00:29:23.658 "unmap": false, 00:29:23.658 "flush": true, 00:29:23.658 "reset": true, 00:29:23.658 "nvme_admin": true, 00:29:23.658 "nvme_io": true, 00:29:23.658 "nvme_io_md": false, 00:29:23.658 "write_zeroes": true, 00:29:23.658 "zcopy": false, 00:29:23.658 "get_zone_info": false, 00:29:23.658 "zone_management": false, 00:29:23.658 "zone_append": false, 00:29:23.658 "compare": true, 00:29:23.658 "compare_and_write": true, 00:29:23.658 "abort": true, 00:29:23.658 "seek_hole": false, 00:29:23.658 "seek_data": false, 00:29:23.658 "copy": true, 00:29:23.658 "nvme_iov_md": false 00:29:23.658 }, 00:29:23.658 "memory_domains": [ 
00:29:23.658 { 00:29:23.658 "dma_device_id": "system", 00:29:23.658 "dma_device_type": 1 00:29:23.658 } 00:29:23.658 ], 00:29:23.658 "driver_specific": { 00:29:23.658 "nvme": [ 00:29:23.658 { 00:29:23.658 "trid": { 00:29:23.658 "trtype": "TCP", 00:29:23.658 "adrfam": "IPv4", 00:29:23.658 "traddr": "10.0.0.2", 00:29:23.658 "trsvcid": "4420", 00:29:23.658 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:23.658 }, 00:29:23.658 "ctrlr_data": { 00:29:23.658 "cntlid": 2, 00:29:23.658 "vendor_id": "0x8086", 00:29:23.658 "model_number": "SPDK bdev Controller", 00:29:23.658 "serial_number": "00000000000000000000", 00:29:23.658 "firmware_revision": "25.01", 00:29:23.658 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:23.658 "oacs": { 00:29:23.658 "security": 0, 00:29:23.658 "format": 0, 00:29:23.658 "firmware": 0, 00:29:23.658 "ns_manage": 0 00:29:23.658 }, 00:29:23.658 "multi_ctrlr": true, 00:29:23.658 "ana_reporting": false 00:29:23.658 }, 00:29:23.658 "vs": { 00:29:23.658 "nvme_version": "1.3" 00:29:23.658 }, 00:29:23.658 "ns_data": { 00:29:23.658 "id": 1, 00:29:23.658 "can_share": true 00:29:23.658 } 00:29:23.658 } 00:29:23.658 ], 00:29:23.658 "mp_policy": "active_passive" 00:29:23.658 } 00:29:23.658 } 00:29:23.658 ] 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.KGy359o9Qa 
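The key handling that follows (`mktemp`, `echo -n`, `chmod 0600`, `keyring_file_add_key`) is small but easy to get wrong: the PSK interchange string must land in a file readable only by its owner before it is registered with the keyring. A standalone recreation, using the same well-known interchange-format test key as the log (not a production secret):

```shell
# Recreate the PSK setup from the log: write the interchange-format key
# to a private temp file, exactly as async_init.sh does before
# registering it with keyring_file_add_key.
key_path=$(mktemp)
echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key_path"
chmod 0600 "$key_path"
# Next step in the suite (needs the running target, shown for context):
#   rpc.py keyring_file_add_key key0 "$key_path"
```

Note the `-n`: the interchange string is stored without a trailing newline, and the file is removed again (`rm -f`) during teardown at the end of the test.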
00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.KGy359o9Qa 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.KGy359o9Qa 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.658 [2024-12-14 22:38:44.456151] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:23.658 [2024-12-14 22:38:44.456240] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
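With `key0` registered, the TLS path differs from the plain port-4420 setup in only a few calls: restrict host access, open a `--secure-channel` listener on 4421, admit the host with the PSK, and reconnect presenting the same key. A dry-run sketch of that sequence (echoed, since it needs the live target; our `rpc` wrapper stands in for the suite's `rpc_cmd`):

```shell
# Dry-run of the secure-channel sequence from host/async_init.sh.
NQN=nqn.2016-06.io.spdk:cnode0
HOSTNQN=nqn.2016-06.io.spdk:host1

rpc() { echo "+ rpc.py $*"; }   # replace the echo with scripts/rpc.py to run live

# 1. Stop admitting arbitrary hosts, then open a TLS listener on 4421.
rpc nvmf_subsystem_allow_any_host "$NQN" --disable
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421 --secure-channel
# 2. Admit the initiator, bound to the registered PSK.
rpc nvmf_subsystem_add_host "$NQN" "$HOSTNQN" --psk key0
# 3. Connect from the host side, presenting the same key.
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n "$NQN" -q "$HOSTNQN" --psk key0
```

Both the listener and the attach emit "TLS support is considered experimental" notices in the log above; the subsequent `bdev_get_bdevs` dump showing `"trsvcid": "4421"` and `"cntlid": 3` confirms the connection came in over the secured listener.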
00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.658 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.658 [2024-12-14 22:38:44.476217] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:23.917 nvme0n1 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.918 [ 00:29:23.918 { 00:29:23.918 "name": "nvme0n1", 00:29:23.918 "aliases": [ 00:29:23.918 "61e3422c-767b-4f5c-b646-f051f8c9933d" 00:29:23.918 ], 00:29:23.918 "product_name": "NVMe disk", 00:29:23.918 "block_size": 512, 00:29:23.918 "num_blocks": 2097152, 00:29:23.918 "uuid": "61e3422c-767b-4f5c-b646-f051f8c9933d", 00:29:23.918 "numa_id": 1, 00:29:23.918 "assigned_rate_limits": { 00:29:23.918 "rw_ios_per_sec": 0, 00:29:23.918 
"rw_mbytes_per_sec": 0, 00:29:23.918 "r_mbytes_per_sec": 0, 00:29:23.918 "w_mbytes_per_sec": 0 00:29:23.918 }, 00:29:23.918 "claimed": false, 00:29:23.918 "zoned": false, 00:29:23.918 "supported_io_types": { 00:29:23.918 "read": true, 00:29:23.918 "write": true, 00:29:23.918 "unmap": false, 00:29:23.918 "flush": true, 00:29:23.918 "reset": true, 00:29:23.918 "nvme_admin": true, 00:29:23.918 "nvme_io": true, 00:29:23.918 "nvme_io_md": false, 00:29:23.918 "write_zeroes": true, 00:29:23.918 "zcopy": false, 00:29:23.918 "get_zone_info": false, 00:29:23.918 "zone_management": false, 00:29:23.918 "zone_append": false, 00:29:23.918 "compare": true, 00:29:23.918 "compare_and_write": true, 00:29:23.918 "abort": true, 00:29:23.918 "seek_hole": false, 00:29:23.918 "seek_data": false, 00:29:23.918 "copy": true, 00:29:23.918 "nvme_iov_md": false 00:29:23.918 }, 00:29:23.918 "memory_domains": [ 00:29:23.918 { 00:29:23.918 "dma_device_id": "system", 00:29:23.918 "dma_device_type": 1 00:29:23.918 } 00:29:23.918 ], 00:29:23.918 "driver_specific": { 00:29:23.918 "nvme": [ 00:29:23.918 { 00:29:23.918 "trid": { 00:29:23.918 "trtype": "TCP", 00:29:23.918 "adrfam": "IPv4", 00:29:23.918 "traddr": "10.0.0.2", 00:29:23.918 "trsvcid": "4421", 00:29:23.918 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:23.918 }, 00:29:23.918 "ctrlr_data": { 00:29:23.918 "cntlid": 3, 00:29:23.918 "vendor_id": "0x8086", 00:29:23.918 "model_number": "SPDK bdev Controller", 00:29:23.918 "serial_number": "00000000000000000000", 00:29:23.918 "firmware_revision": "25.01", 00:29:23.918 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:23.918 "oacs": { 00:29:23.918 "security": 0, 00:29:23.918 "format": 0, 00:29:23.918 "firmware": 0, 00:29:23.918 "ns_manage": 0 00:29:23.918 }, 00:29:23.918 "multi_ctrlr": true, 00:29:23.918 "ana_reporting": false 00:29:23.918 }, 00:29:23.918 "vs": { 00:29:23.918 "nvme_version": "1.3" 00:29:23.918 }, 00:29:23.918 "ns_data": { 00:29:23.918 "id": 1, 00:29:23.918 "can_share": true 00:29:23.918 } 
00:29:23.918 } 00:29:23.918 ], 00:29:23.918 "mp_policy": "active_passive" 00:29:23.918 } 00:29:23.918 } 00:29:23.918 ] 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.KGy359o9Qa 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:23.918 rmmod nvme_tcp 00:29:23.918 rmmod nvme_fabrics 00:29:23.918 rmmod nvme_keyring 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:23.918 22:38:44 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 445865 ']' 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 445865 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 445865 ']' 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 445865 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 445865 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 445865' 00:29:23.918 killing process with pid 445865 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 445865 00:29:23.918 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 445865 00:29:24.178 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:24.178 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:24.178 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:24.178 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:24.178 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:24.178 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:24.178 22:38:44 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:24.178 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:24.178 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:24.178 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.178 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.178 22:38:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.083 22:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:26.083 00:29:26.083 real 0m9.373s 00:29:26.083 user 0m3.024s 00:29:26.083 sys 0m4.747s 00:29:26.083 22:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:26.083 22:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.083 ************************************ 00:29:26.083 END TEST nvmf_async_init 00:29:26.083 ************************************ 00:29:26.083 22:38:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:26.342 22:38:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:26.342 22:38:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:26.342 22:38:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.342 ************************************ 00:29:26.342 START TEST dma 00:29:26.342 ************************************ 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:26.342 * 
Looking for test storage... 00:29:26.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:26.342 22:38:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:26.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.343 --rc genhtml_branch_coverage=1 00:29:26.343 --rc genhtml_function_coverage=1 00:29:26.343 --rc genhtml_legend=1 00:29:26.343 --rc geninfo_all_blocks=1 00:29:26.343 --rc geninfo_unexecuted_blocks=1 00:29:26.343 00:29:26.343 ' 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:26.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.343 --rc genhtml_branch_coverage=1 00:29:26.343 --rc genhtml_function_coverage=1 
00:29:26.343 --rc genhtml_legend=1 00:29:26.343 --rc geninfo_all_blocks=1 00:29:26.343 --rc geninfo_unexecuted_blocks=1 00:29:26.343 00:29:26.343 ' 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:26.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.343 --rc genhtml_branch_coverage=1 00:29:26.343 --rc genhtml_function_coverage=1 00:29:26.343 --rc genhtml_legend=1 00:29:26.343 --rc geninfo_all_blocks=1 00:29:26.343 --rc geninfo_unexecuted_blocks=1 00:29:26.343 00:29:26.343 ' 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:26.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.343 --rc genhtml_branch_coverage=1 00:29:26.343 --rc genhtml_function_coverage=1 00:29:26.343 --rc genhtml_legend=1 00:29:26.343 --rc geninfo_all_blocks=1 00:29:26.343 --rc geninfo_unexecuted_blocks=1 00:29:26.343 00:29:26.343 ' 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:26.343 
22:38:47 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:26.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:26.343 00:29:26.343 real 0m0.212s 00:29:26.343 user 0m0.130s 00:29:26.343 sys 0m0.096s 00:29:26.343 22:38:47 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:26.343 22:38:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:26.343 ************************************ 00:29:26.343 END TEST dma 00:29:26.343 ************************************ 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.603 ************************************ 00:29:26.603 START TEST nvmf_identify 00:29:26.603 ************************************ 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:26.603 * Looking for test storage... 
00:29:26.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:26.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.603 --rc genhtml_branch_coverage=1 00:29:26.603 --rc genhtml_function_coverage=1 00:29:26.603 --rc genhtml_legend=1 00:29:26.603 --rc geninfo_all_blocks=1 00:29:26.603 --rc geninfo_unexecuted_blocks=1 00:29:26.603 00:29:26.603 ' 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:29:26.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.603 --rc genhtml_branch_coverage=1 00:29:26.603 --rc genhtml_function_coverage=1 00:29:26.603 --rc genhtml_legend=1 00:29:26.603 --rc geninfo_all_blocks=1 00:29:26.603 --rc geninfo_unexecuted_blocks=1 00:29:26.603 00:29:26.603 ' 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:26.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.603 --rc genhtml_branch_coverage=1 00:29:26.603 --rc genhtml_function_coverage=1 00:29:26.603 --rc genhtml_legend=1 00:29:26.603 --rc geninfo_all_blocks=1 00:29:26.603 --rc geninfo_unexecuted_blocks=1 00:29:26.603 00:29:26.603 ' 00:29:26.603 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:26.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.603 --rc genhtml_branch_coverage=1 00:29:26.603 --rc genhtml_function_coverage=1 00:29:26.603 --rc genhtml_legend=1 00:29:26.603 --rc geninfo_all_blocks=1 00:29:26.603 --rc geninfo_unexecuted_blocks=1 00:29:26.603 00:29:26.603 ' 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:26.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:26.604 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:26.863 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:26.863 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:26.863 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:26.863 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:26.863 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:26.863 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:26.863 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:26.863 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:26.863 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.863 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.863 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.863 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:26.863 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:26.863 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:26.863 22:38:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:33.439 22:38:53 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.439 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:33.440 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.440 
22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:33.440 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:33.440 Found net devices under 0000:af:00.0: cvl_0_0 00:29:33.440 22:38:53 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:33.440 Found net devices under 0000:af:00.1: cvl_0_1 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:33.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:29:33.440 00:29:33.440 --- 10.0.0.2 ping statistics --- 00:29:33.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.440 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:33.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:29:33.440 00:29:33.440 --- 10.0.0.1 ping statistics --- 00:29:33.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.440 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=449622 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 449622 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 449622 ']' 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.440 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:33.440 [2024-12-14 22:38:53.390083] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:33.440 [2024-12-14 22:38:53.390132] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.440 [2024-12-14 22:38:53.468846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:33.440 [2024-12-14 22:38:53.493736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.440 [2024-12-14 22:38:53.493773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.440 [2024-12-14 22:38:53.493780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.440 [2024-12-14 22:38:53.493786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.440 [2024-12-14 22:38:53.493792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:33.440 [2024-12-14 22:38:53.495216] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.440 [2024-12-14 22:38:53.495327] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:33.440 [2024-12-14 22:38:53.495435] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.440 [2024-12-14 22:38:53.495436] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:33.441 [2024-12-14 22:38:53.592284] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:33.441 Malloc0 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.441 22:38:53 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:33.441 [2024-12-14 22:38:53.705371] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:33.441 22:38:53 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:33.441 [ 00:29:33.441 { 00:29:33.441 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:33.441 "subtype": "Discovery", 00:29:33.441 "listen_addresses": [ 00:29:33.441 { 00:29:33.441 "trtype": "TCP", 00:29:33.441 "adrfam": "IPv4", 00:29:33.441 "traddr": "10.0.0.2", 00:29:33.441 "trsvcid": "4420" 00:29:33.441 } 00:29:33.441 ], 00:29:33.441 "allow_any_host": true, 00:29:33.441 "hosts": [] 00:29:33.441 }, 00:29:33.441 { 00:29:33.441 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:33.441 "subtype": "NVMe", 00:29:33.441 "listen_addresses": [ 00:29:33.441 { 00:29:33.441 "trtype": "TCP", 00:29:33.441 "adrfam": "IPv4", 00:29:33.441 "traddr": "10.0.0.2", 00:29:33.441 "trsvcid": "4420" 00:29:33.441 } 00:29:33.441 ], 00:29:33.441 "allow_any_host": true, 00:29:33.441 "hosts": [], 00:29:33.441 "serial_number": "SPDK00000000000001", 00:29:33.441 "model_number": "SPDK bdev Controller", 00:29:33.441 "max_namespaces": 32, 00:29:33.441 "min_cntlid": 1, 00:29:33.441 "max_cntlid": 65519, 00:29:33.441 "namespaces": [ 00:29:33.441 { 00:29:33.441 "nsid": 1, 00:29:33.441 "bdev_name": "Malloc0", 00:29:33.441 "name": "Malloc0", 00:29:33.441 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:33.441 "eui64": "ABCDEF0123456789", 00:29:33.441 "uuid": "72ef498e-1d14-4a82-932c-38f0fc61a004" 00:29:33.441 } 00:29:33.441 ] 00:29:33.441 } 00:29:33.441 ] 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.441 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:33.441 [2024-12-14 22:38:53.761669] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:33.441 [2024-12-14 22:38:53.761718] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449651 ] 00:29:33.441 [2024-12-14 22:38:53.804086] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:33.441 [2024-12-14 22:38:53.804133] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:33.441 [2024-12-14 22:38:53.804138] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:33.441 [2024-12-14 22:38:53.804149] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:33.441 [2024-12-14 22:38:53.804157] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:33.441 [2024-12-14 22:38:53.804690] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:33.441 [2024-12-14 22:38:53.804720] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a68de0 0 00:29:33.441 [2024-12-14 22:38:53.810919] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:33.441 [2024-12-14 22:38:53.810932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:33.441 [2024-12-14 22:38:53.810936] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:33.441 [2024-12-14 22:38:53.810939] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:33.441 [2024-12-14 22:38:53.810965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.441 [2024-12-14 22:38:53.810970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.441 [2024-12-14 22:38:53.810974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a68de0) 00:29:33.441 [2024-12-14 22:38:53.810986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:33.441 [2024-12-14 22:38:53.811002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac3f40, cid 0, qid 0 00:29:33.441 [2024-12-14 22:38:53.818913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.441 [2024-12-14 22:38:53.818923] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.441 [2024-12-14 22:38:53.818927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.441 [2024-12-14 22:38:53.818931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac3f40) on tqpair=0x1a68de0 00:29:33.441 [2024-12-14 22:38:53.818942] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:33.441 [2024-12-14 22:38:53.818948] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:33.441 [2024-12-14 22:38:53.818956] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:33.441 [2024-12-14 22:38:53.818967] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.441 [2024-12-14 22:38:53.818971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.441 [2024-12-14 22:38:53.818974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a68de0) 
00:29:33.441 [2024-12-14 22:38:53.818981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.441 [2024-12-14 22:38:53.818994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac3f40, cid 0, qid 0 00:29:33.441 [2024-12-14 22:38:53.819149] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.441 [2024-12-14 22:38:53.819155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.441 [2024-12-14 22:38:53.819158] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.441 [2024-12-14 22:38:53.819162] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac3f40) on tqpair=0x1a68de0 00:29:33.441 [2024-12-14 22:38:53.819167] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:33.441 [2024-12-14 22:38:53.819173] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:33.441 [2024-12-14 22:38:53.819179] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.441 [2024-12-14 22:38:53.819183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.441 [2024-12-14 22:38:53.819186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a68de0) 00:29:33.441 [2024-12-14 22:38:53.819191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.441 [2024-12-14 22:38:53.819201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac3f40, cid 0, qid 0 00:29:33.441 [2024-12-14 22:38:53.819263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.441 [2024-12-14 22:38:53.819269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:33.441 [2024-12-14 22:38:53.819272] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.441 [2024-12-14 22:38:53.819275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac3f40) on tqpair=0x1a68de0 00:29:33.441 [2024-12-14 22:38:53.819280] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:33.441 [2024-12-14 22:38:53.819286] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:33.441 [2024-12-14 22:38:53.819292] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.441 [2024-12-14 22:38:53.819295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.441 [2024-12-14 22:38:53.819299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a68de0) 00:29:33.441 [2024-12-14 22:38:53.819304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.441 [2024-12-14 22:38:53.819313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac3f40, cid 0, qid 0 00:29:33.441 [2024-12-14 22:38:53.819377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.441 [2024-12-14 22:38:53.819382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.442 [2024-12-14 22:38:53.819385] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.819389] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac3f40) on tqpair=0x1a68de0 00:29:33.442 [2024-12-14 22:38:53.819393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:33.442 [2024-12-14 22:38:53.819401] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.819406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.819410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a68de0) 00:29:33.442 [2024-12-14 22:38:53.819415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.442 [2024-12-14 22:38:53.819424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac3f40, cid 0, qid 0 00:29:33.442 [2024-12-14 22:38:53.819486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.442 [2024-12-14 22:38:53.819491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.442 [2024-12-14 22:38:53.819494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.819498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac3f40) on tqpair=0x1a68de0 00:29:33.442 [2024-12-14 22:38:53.819501] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:33.442 [2024-12-14 22:38:53.819506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:33.442 [2024-12-14 22:38:53.819512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:33.442 [2024-12-14 22:38:53.819620] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:33.442 [2024-12-14 22:38:53.819624] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:29:33.442 [2024-12-14 22:38:53.819631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.819635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.819638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a68de0) 00:29:33.442 [2024-12-14 22:38:53.819643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.442 [2024-12-14 22:38:53.819652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac3f40, cid 0, qid 0 00:29:33.442 [2024-12-14 22:38:53.819712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.442 [2024-12-14 22:38:53.819718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.442 [2024-12-14 22:38:53.819721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.819724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac3f40) on tqpair=0x1a68de0 00:29:33.442 [2024-12-14 22:38:53.819728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:33.442 [2024-12-14 22:38:53.819736] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.819739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.819743] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a68de0) 00:29:33.442 [2024-12-14 22:38:53.819748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.442 [2024-12-14 22:38:53.819757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac3f40, cid 0, qid 0 00:29:33.442 [2024-12-14 
22:38:53.819813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.442 [2024-12-14 22:38:53.819819] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.442 [2024-12-14 22:38:53.819822] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.819825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac3f40) on tqpair=0x1a68de0 00:29:33.442 [2024-12-14 22:38:53.819829] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:33.442 [2024-12-14 22:38:53.819835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:33.442 [2024-12-14 22:38:53.819841] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:33.442 [2024-12-14 22:38:53.819852] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:33.442 [2024-12-14 22:38:53.819859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.819862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a68de0) 00:29:33.442 [2024-12-14 22:38:53.819868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.442 [2024-12-14 22:38:53.819877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac3f40, cid 0, qid 0 00:29:33.442 [2024-12-14 22:38:53.819971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:33.442 [2024-12-14 22:38:53.819978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:29:33.442 [2024-12-14 22:38:53.819981] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.819984] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a68de0): datao=0, datal=4096, cccid=0 00:29:33.442 [2024-12-14 22:38:53.819989] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ac3f40) on tqpair(0x1a68de0): expected_datao=0, payload_size=4096 00:29:33.442 [2024-12-14 22:38:53.819993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.820006] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.820010] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.864913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.442 [2024-12-14 22:38:53.864924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.442 [2024-12-14 22:38:53.864927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.864930] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac3f40) on tqpair=0x1a68de0 00:29:33.442 [2024-12-14 22:38:53.864939] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:33.442 [2024-12-14 22:38:53.864943] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:33.442 [2024-12-14 22:38:53.864947] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:33.442 [2024-12-14 22:38:53.864952] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:33.442 [2024-12-14 22:38:53.864956] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:33.442 [2024-12-14 22:38:53.864961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:33.442 [2024-12-14 22:38:53.864973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:33.442 [2024-12-14 22:38:53.864981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.864985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.864988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a68de0) 00:29:33.442 [2024-12-14 22:38:53.864996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:33.442 [2024-12-14 22:38:53.865008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac3f40, cid 0, qid 0 00:29:33.442 [2024-12-14 22:38:53.865156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.442 [2024-12-14 22:38:53.865162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.442 [2024-12-14 22:38:53.865165] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.865168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac3f40) on tqpair=0x1a68de0 00:29:33.442 [2024-12-14 22:38:53.865174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.865178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.865181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a68de0) 00:29:33.442 [2024-12-14 22:38:53.865186] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.442 [2024-12-14 22:38:53.865191] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.865194] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.865197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a68de0) 00:29:33.442 [2024-12-14 22:38:53.865203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.442 [2024-12-14 22:38:53.865208] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.865211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.865214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a68de0) 00:29:33.442 [2024-12-14 22:38:53.865219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.442 [2024-12-14 22:38:53.865224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.865227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.865230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a68de0) 00:29:33.442 [2024-12-14 22:38:53.865235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.442 [2024-12-14 22:38:53.865239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:33.442 [2024-12-14 22:38:53.865251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:33.442 [2024-12-14 22:38:53.865257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.442 [2024-12-14 22:38:53.865260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a68de0) 00:29:33.442 [2024-12-14 22:38:53.865266] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.442 [2024-12-14 22:38:53.865277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac3f40, cid 0, qid 0 00:29:33.442 [2024-12-14 22:38:53.865281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac40c0, cid 1, qid 0 00:29:33.442 [2024-12-14 22:38:53.865286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4240, cid 2, qid 0 00:29:33.442 [2024-12-14 22:38:53.865290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac43c0, cid 3, qid 0 00:29:33.442 [2024-12-14 22:38:53.865293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4540, cid 4, qid 0 00:29:33.442 [2024-12-14 22:38:53.865393] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.442 [2024-12-14 22:38:53.865399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.443 [2024-12-14 22:38:53.865402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.865405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac4540) on tqpair=0x1a68de0 00:29:33.443 [2024-12-14 22:38:53.865411] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:33.443 [2024-12-14 22:38:53.865416] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:29:33.443 [2024-12-14 22:38:53.865425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.865429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a68de0) 00:29:33.443 [2024-12-14 22:38:53.865435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.443 [2024-12-14 22:38:53.865444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4540, cid 4, qid 0 00:29:33.443 [2024-12-14 22:38:53.865517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:33.443 [2024-12-14 22:38:53.865523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:33.443 [2024-12-14 22:38:53.865526] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.865529] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a68de0): datao=0, datal=4096, cccid=4 00:29:33.443 [2024-12-14 22:38:53.865533] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ac4540) on tqpair(0x1a68de0): expected_datao=0, payload_size=4096 00:29:33.443 [2024-12-14 22:38:53.865537] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.865542] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.865546] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.865566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.443 [2024-12-14 22:38:53.865571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.443 [2024-12-14 22:38:53.865574] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.865578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1ac4540) on tqpair=0x1a68de0 00:29:33.443 [2024-12-14 22:38:53.865589] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:33.443 [2024-12-14 22:38:53.865610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.865614] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a68de0) 00:29:33.443 [2024-12-14 22:38:53.865620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.443 [2024-12-14 22:38:53.865625] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.865628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.865632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a68de0) 00:29:33.443 [2024-12-14 22:38:53.865637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.443 [2024-12-14 22:38:53.865649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4540, cid 4, qid 0 00:29:33.443 [2024-12-14 22:38:53.865654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac46c0, cid 5, qid 0 00:29:33.443 [2024-12-14 22:38:53.865766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:33.443 [2024-12-14 22:38:53.865771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:33.443 [2024-12-14 22:38:53.865774] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.865777] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a68de0): datao=0, datal=1024, cccid=4 00:29:33.443 [2024-12-14 22:38:53.865781] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ac4540) on tqpair(0x1a68de0): expected_datao=0, payload_size=1024 00:29:33.443 [2024-12-14 22:38:53.865785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.865792] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.865795] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.865800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.443 [2024-12-14 22:38:53.865805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.443 [2024-12-14 22:38:53.865808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.865811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac46c0) on tqpair=0x1a68de0 00:29:33.443 [2024-12-14 22:38:53.906080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.443 [2024-12-14 22:38:53.906092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.443 [2024-12-14 22:38:53.906096] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.906099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac4540) on tqpair=0x1a68de0 00:29:33.443 [2024-12-14 22:38:53.906110] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.906114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a68de0) 00:29:33.443 [2024-12-14 22:38:53.906121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.443 [2024-12-14 22:38:53.906137] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4540, cid 4, qid 0 00:29:33.443 [2024-12-14 22:38:53.906209] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:33.443 [2024-12-14 22:38:53.906214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:33.443 [2024-12-14 22:38:53.906217] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.906220] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a68de0): datao=0, datal=3072, cccid=4 00:29:33.443 [2024-12-14 22:38:53.906225] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ac4540) on tqpair(0x1a68de0): expected_datao=0, payload_size=3072 00:29:33.443 [2024-12-14 22:38:53.906229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.906252] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.906256] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.906333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.443 [2024-12-14 22:38:53.906338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.443 [2024-12-14 22:38:53.906341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.906344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac4540) on tqpair=0x1a68de0 00:29:33.443 [2024-12-14 22:38:53.906351] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.906355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a68de0) 00:29:33.443 [2024-12-14 22:38:53.906360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.443 [2024-12-14 22:38:53.906374] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac4540, cid 4, qid 0 00:29:33.443 [2024-12-14 
22:38:53.906440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:33.443 [2024-12-14 22:38:53.906445] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:33.443 [2024-12-14 22:38:53.906449] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.906452] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a68de0): datao=0, datal=8, cccid=4 00:29:33.443 [2024-12-14 22:38:53.906456] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ac4540) on tqpair(0x1a68de0): expected_datao=0, payload_size=8 00:29:33.443 [2024-12-14 22:38:53.906459] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.906465] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.906471] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.948092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.443 [2024-12-14 22:38:53.948103] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.443 [2024-12-14 22:38:53.948107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.443 [2024-12-14 22:38:53.948110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac4540) on tqpair=0x1a68de0 00:29:33.443 ===================================================== 00:29:33.443 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:33.443 ===================================================== 00:29:33.443 Controller Capabilities/Features 00:29:33.443 ================================ 00:29:33.443 Vendor ID: 0000 00:29:33.443 Subsystem Vendor ID: 0000 00:29:33.443 Serial Number: .................... 00:29:33.443 Model Number: ........................................ 
00:29:33.443 Firmware Version: 25.01 00:29:33.443 Recommended Arb Burst: 0 00:29:33.443 IEEE OUI Identifier: 00 00 00 00:29:33.443 Multi-path I/O 00:29:33.443 May have multiple subsystem ports: No 00:29:33.443 May have multiple controllers: No 00:29:33.443 Associated with SR-IOV VF: No 00:29:33.443 Max Data Transfer Size: 131072 00:29:33.443 Max Number of Namespaces: 0 00:29:33.443 Max Number of I/O Queues: 1024 00:29:33.443 NVMe Specification Version (VS): 1.3 00:29:33.443 NVMe Specification Version (Identify): 1.3 00:29:33.443 Maximum Queue Entries: 128 00:29:33.443 Contiguous Queues Required: Yes 00:29:33.443 Arbitration Mechanisms Supported 00:29:33.443 Weighted Round Robin: Not Supported 00:29:33.443 Vendor Specific: Not Supported 00:29:33.443 Reset Timeout: 15000 ms 00:29:33.443 Doorbell Stride: 4 bytes 00:29:33.443 NVM Subsystem Reset: Not Supported 00:29:33.443 Command Sets Supported 00:29:33.443 NVM Command Set: Supported 00:29:33.443 Boot Partition: Not Supported 00:29:33.443 Memory Page Size Minimum: 4096 bytes 00:29:33.443 Memory Page Size Maximum: 4096 bytes 00:29:33.443 Persistent Memory Region: Not Supported 00:29:33.443 Optional Asynchronous Events Supported 00:29:33.443 Namespace Attribute Notices: Not Supported 00:29:33.443 Firmware Activation Notices: Not Supported 00:29:33.443 ANA Change Notices: Not Supported 00:29:33.443 PLE Aggregate Log Change Notices: Not Supported 00:29:33.443 LBA Status Info Alert Notices: Not Supported 00:29:33.443 EGE Aggregate Log Change Notices: Not Supported 00:29:33.443 Normal NVM Subsystem Shutdown event: Not Supported 00:29:33.443 Zone Descriptor Change Notices: Not Supported 00:29:33.443 Discovery Log Change Notices: Supported 00:29:33.443 Controller Attributes 00:29:33.443 128-bit Host Identifier: Not Supported 00:29:33.443 Non-Operational Permissive Mode: Not Supported 00:29:33.443 NVM Sets: Not Supported 00:29:33.443 Read Recovery Levels: Not Supported 00:29:33.443 Endurance Groups: Not Supported 00:29:33.443 
Predictable Latency Mode: Not Supported 00:29:33.443 Traffic Based Keep ALive: Not Supported 00:29:33.443 Namespace Granularity: Not Supported 00:29:33.444 SQ Associations: Not Supported 00:29:33.444 UUID List: Not Supported 00:29:33.444 Multi-Domain Subsystem: Not Supported 00:29:33.444 Fixed Capacity Management: Not Supported 00:29:33.444 Variable Capacity Management: Not Supported 00:29:33.444 Delete Endurance Group: Not Supported 00:29:33.444 Delete NVM Set: Not Supported 00:29:33.444 Extended LBA Formats Supported: Not Supported 00:29:33.444 Flexible Data Placement Supported: Not Supported 00:29:33.444 00:29:33.444 Controller Memory Buffer Support 00:29:33.444 ================================ 00:29:33.444 Supported: No 00:29:33.444 00:29:33.444 Persistent Memory Region Support 00:29:33.444 ================================ 00:29:33.444 Supported: No 00:29:33.444 00:29:33.444 Admin Command Set Attributes 00:29:33.444 ============================ 00:29:33.444 Security Send/Receive: Not Supported 00:29:33.444 Format NVM: Not Supported 00:29:33.444 Firmware Activate/Download: Not Supported 00:29:33.444 Namespace Management: Not Supported 00:29:33.444 Device Self-Test: Not Supported 00:29:33.444 Directives: Not Supported 00:29:33.444 NVMe-MI: Not Supported 00:29:33.444 Virtualization Management: Not Supported 00:29:33.444 Doorbell Buffer Config: Not Supported 00:29:33.444 Get LBA Status Capability: Not Supported 00:29:33.444 Command & Feature Lockdown Capability: Not Supported 00:29:33.444 Abort Command Limit: 1 00:29:33.444 Async Event Request Limit: 4 00:29:33.444 Number of Firmware Slots: N/A 00:29:33.444 Firmware Slot 1 Read-Only: N/A 00:29:33.444 Firmware Activation Without Reset: N/A 00:29:33.444 Multiple Update Detection Support: N/A 00:29:33.444 Firmware Update Granularity: No Information Provided 00:29:33.444 Per-Namespace SMART Log: No 00:29:33.444 Asymmetric Namespace Access Log Page: Not Supported 00:29:33.444 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:29:33.444 Command Effects Log Page: Not Supported 00:29:33.444 Get Log Page Extended Data: Supported 00:29:33.444 Telemetry Log Pages: Not Supported 00:29:33.444 Persistent Event Log Pages: Not Supported 00:29:33.444 Supported Log Pages Log Page: May Support 00:29:33.444 Commands Supported & Effects Log Page: Not Supported 00:29:33.444 Feature Identifiers & Effects Log Page:May Support 00:29:33.444 NVMe-MI Commands & Effects Log Page: May Support 00:29:33.444 Data Area 4 for Telemetry Log: Not Supported 00:29:33.444 Error Log Page Entries Supported: 128 00:29:33.444 Keep Alive: Not Supported 00:29:33.444 00:29:33.444 NVM Command Set Attributes 00:29:33.444 ========================== 00:29:33.444 Submission Queue Entry Size 00:29:33.444 Max: 1 00:29:33.444 Min: 1 00:29:33.444 Completion Queue Entry Size 00:29:33.444 Max: 1 00:29:33.444 Min: 1 00:29:33.444 Number of Namespaces: 0 00:29:33.444 Compare Command: Not Supported 00:29:33.444 Write Uncorrectable Command: Not Supported 00:29:33.444 Dataset Management Command: Not Supported 00:29:33.444 Write Zeroes Command: Not Supported 00:29:33.444 Set Features Save Field: Not Supported 00:29:33.444 Reservations: Not Supported 00:29:33.444 Timestamp: Not Supported 00:29:33.444 Copy: Not Supported 00:29:33.444 Volatile Write Cache: Not Present 00:29:33.444 Atomic Write Unit (Normal): 1 00:29:33.444 Atomic Write Unit (PFail): 1 00:29:33.444 Atomic Compare & Write Unit: 1 00:29:33.444 Fused Compare & Write: Supported 00:29:33.444 Scatter-Gather List 00:29:33.444 SGL Command Set: Supported 00:29:33.444 SGL Keyed: Supported 00:29:33.444 SGL Bit Bucket Descriptor: Not Supported 00:29:33.444 SGL Metadata Pointer: Not Supported 00:29:33.444 Oversized SGL: Not Supported 00:29:33.444 SGL Metadata Address: Not Supported 00:29:33.444 SGL Offset: Supported 00:29:33.444 Transport SGL Data Block: Not Supported 00:29:33.444 Replay Protected Memory Block: Not Supported 00:29:33.444 00:29:33.444 
Firmware Slot Information 00:29:33.444 ========================= 00:29:33.444 Active slot: 0 00:29:33.444 00:29:33.444 00:29:33.444 Error Log 00:29:33.444 ========= 00:29:33.444 00:29:33.444 Active Namespaces 00:29:33.444 ================= 00:29:33.444 Discovery Log Page 00:29:33.444 ================== 00:29:33.444 Generation Counter: 2 00:29:33.444 Number of Records: 2 00:29:33.444 Record Format: 0 00:29:33.444 00:29:33.444 Discovery Log Entry 0 00:29:33.444 ---------------------- 00:29:33.444 Transport Type: 3 (TCP) 00:29:33.444 Address Family: 1 (IPv4) 00:29:33.444 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:33.444 Entry Flags: 00:29:33.444 Duplicate Returned Information: 1 00:29:33.444 Explicit Persistent Connection Support for Discovery: 1 00:29:33.444 Transport Requirements: 00:29:33.444 Secure Channel: Not Required 00:29:33.444 Port ID: 0 (0x0000) 00:29:33.444 Controller ID: 65535 (0xffff) 00:29:33.444 Admin Max SQ Size: 128 00:29:33.444 Transport Service Identifier: 4420 00:29:33.444 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:33.444 Transport Address: 10.0.0.2 00:29:33.444 Discovery Log Entry 1 00:29:33.444 ---------------------- 00:29:33.444 Transport Type: 3 (TCP) 00:29:33.444 Address Family: 1 (IPv4) 00:29:33.444 Subsystem Type: 2 (NVM Subsystem) 00:29:33.444 Entry Flags: 00:29:33.444 Duplicate Returned Information: 0 00:29:33.444 Explicit Persistent Connection Support for Discovery: 0 00:29:33.444 Transport Requirements: 00:29:33.444 Secure Channel: Not Required 00:29:33.444 Port ID: 0 (0x0000) 00:29:33.444 Controller ID: 65535 (0xffff) 00:29:33.444 Admin Max SQ Size: 128 00:29:33.444 Transport Service Identifier: 4420 00:29:33.444 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:33.444 Transport Address: 10.0.0.2 [2024-12-14 22:38:53.948190] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:33.444 [2024-12-14 
22:38:53.948200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac3f40) on tqpair=0x1a68de0 00:29:33.444 [2024-12-14 22:38:53.948206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.444 [2024-12-14 22:38:53.948210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac40c0) on tqpair=0x1a68de0 00:29:33.444 [2024-12-14 22:38:53.948215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.444 [2024-12-14 22:38:53.948219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac4240) on tqpair=0x1a68de0 00:29:33.444 [2024-12-14 22:38:53.948223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.444 [2024-12-14 22:38:53.948227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac43c0) on tqpair=0x1a68de0 00:29:33.444 [2024-12-14 22:38:53.948231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.444 [2024-12-14 22:38:53.948238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.444 [2024-12-14 22:38:53.948242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.444 [2024-12-14 22:38:53.948245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a68de0) 00:29:33.444 [2024-12-14 22:38:53.948252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.444 [2024-12-14 22:38:53.948265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac43c0, cid 3, qid 0 00:29:33.444 [2024-12-14 22:38:53.948332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.444 [2024-12-14 
22:38:53.948338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.444 [2024-12-14 22:38:53.948341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.444 [2024-12-14 22:38:53.948344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac43c0) on tqpair=0x1a68de0 00:29:33.445 [2024-12-14 22:38:53.948350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.948353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.948356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a68de0) 00:29:33.445 [2024-12-14 22:38:53.948362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.445 [2024-12-14 22:38:53.948373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac43c0, cid 3, qid 0 00:29:33.445 [2024-12-14 22:38:53.948481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.445 [2024-12-14 22:38:53.948486] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.445 [2024-12-14 22:38:53.948489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.948492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac43c0) on tqpair=0x1a68de0 00:29:33.445 [2024-12-14 22:38:53.948496] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:33.445 [2024-12-14 22:38:53.948500] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:33.445 [2024-12-14 22:38:53.948507] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.948512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.445 
[2024-12-14 22:38:53.948516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a68de0) 00:29:33.445 [2024-12-14 22:38:53.948521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.445 [2024-12-14 22:38:53.948530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac43c0, cid 3, qid 0 00:29:33.445 [2024-12-14 22:38:53.948588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.445 [2024-12-14 22:38:53.948594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.445 [2024-12-14 22:38:53.948597] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.948600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac43c0) on tqpair=0x1a68de0 00:29:33.445 [2024-12-14 22:38:53.948608] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.948611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.948614] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a68de0) 00:29:33.445 [2024-12-14 22:38:53.948620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.445 [2024-12-14 22:38:53.948628] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac43c0, cid 3, qid 0 00:29:33.445 [2024-12-14 22:38:53.948733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.445 [2024-12-14 22:38:53.948738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.445 [2024-12-14 22:38:53.948741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.948744] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac43c0) on 
tqpair=0x1a68de0 00:29:33.445 [2024-12-14 22:38:53.948752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.948755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.948758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a68de0) 00:29:33.445 [2024-12-14 22:38:53.948763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.445 [2024-12-14 22:38:53.948772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac43c0, cid 3, qid 0 00:29:33.445 [2024-12-14 22:38:53.948884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.445 [2024-12-14 22:38:53.948889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.445 [2024-12-14 22:38:53.948892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.948895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac43c0) on tqpair=0x1a68de0 00:29:33.445 [2024-12-14 22:38:53.948907] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.948913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.948917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a68de0) 00:29:33.445 [2024-12-14 22:38:53.948924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.445 [2024-12-14 22:38:53.948950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac43c0, cid 3, qid 0 00:29:33.445 [2024-12-14 22:38:53.949036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.445 [2024-12-14 22:38:53.949042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:29:33.445 [2024-12-14 22:38:53.949045] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac43c0) on tqpair=0x1a68de0 00:29:33.445 [2024-12-14 22:38:53.949056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949060] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949065] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a68de0) 00:29:33.445 [2024-12-14 22:38:53.949070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.445 [2024-12-14 22:38:53.949080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac43c0, cid 3, qid 0 00:29:33.445 [2024-12-14 22:38:53.949142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.445 [2024-12-14 22:38:53.949148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.445 [2024-12-14 22:38:53.949151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac43c0) on tqpair=0x1a68de0 00:29:33.445 [2024-12-14 22:38:53.949162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949165] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949169] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a68de0) 00:29:33.445 [2024-12-14 22:38:53.949174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.445 [2024-12-14 22:38:53.949183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1ac43c0, cid 3, qid 0 00:29:33.445 [2024-12-14 22:38:53.949287] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.445 [2024-12-14 22:38:53.949292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.445 [2024-12-14 22:38:53.949295] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac43c0) on tqpair=0x1a68de0 00:29:33.445 [2024-12-14 22:38:53.949306] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a68de0) 00:29:33.445 [2024-12-14 22:38:53.949318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.445 [2024-12-14 22:38:53.949327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac43c0, cid 3, qid 0 00:29:33.445 [2024-12-14 22:38:53.949389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.445 [2024-12-14 22:38:53.949394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.445 [2024-12-14 22:38:53.949397] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac43c0) on tqpair=0x1a68de0 00:29:33.445 [2024-12-14 22:38:53.949409] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a68de0) 00:29:33.445 [2024-12-14 22:38:53.949421] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.445 [2024-12-14 22:38:53.949429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac43c0, cid 3, qid 0 00:29:33.445 [2024-12-14 22:38:53.949540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.445 [2024-12-14 22:38:53.949546] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.445 [2024-12-14 22:38:53.949549] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac43c0) on tqpair=0x1a68de0 00:29:33.445 [2024-12-14 22:38:53.949560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a68de0) 00:29:33.445 [2024-12-14 22:38:53.949573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.445 [2024-12-14 22:38:53.949582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac43c0, cid 3, qid 0 00:29:33.445 [2024-12-14 22:38:53.949645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.445 [2024-12-14 22:38:53.949650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.445 [2024-12-14 22:38:53.949653] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac43c0) on tqpair=0x1a68de0 00:29:33.445 [2024-12-14 22:38:53.949664] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949668] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a68de0) 00:29:33.445 [2024-12-14 22:38:53.949676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.445 [2024-12-14 22:38:53.949685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac43c0, cid 3, qid 0 00:29:33.445 [2024-12-14 22:38:53.949791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.445 [2024-12-14 22:38:53.949797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.445 [2024-12-14 22:38:53.949800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949803] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac43c0) on tqpair=0x1a68de0 00:29:33.445 [2024-12-14 22:38:53.949811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.949817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a68de0) 00:29:33.445 [2024-12-14 22:38:53.949822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.445 [2024-12-14 22:38:53.949831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac43c0, cid 3, qid 0 00:29:33.445 [2024-12-14 22:38:53.949893] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.445 [2024-12-14 22:38:53.949898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.445 [2024-12-14 22:38:53.953905] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.445 [2024-12-14 22:38:53.953912] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac43c0) on tqpair=0x1a68de0 00:29:33.446 [2024-12-14 22:38:53.953922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:53.953926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:53.953929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a68de0) 00:29:33.446 [2024-12-14 22:38:53.953935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.446 [2024-12-14 22:38:53.953945] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac43c0, cid 3, qid 0 00:29:33.446 [2024-12-14 22:38:53.954131] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.446 [2024-12-14 22:38:53.954137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.446 [2024-12-14 22:38:53.954140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:53.954143] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac43c0) on tqpair=0x1a68de0 00:29:33.446 [2024-12-14 22:38:53.954150] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:29:33.446 00:29:33.446 22:38:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:33.446 [2024-12-14 22:38:53.986157] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:33.446 [2024-12-14 22:38:53.986189] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449653 ] 00:29:33.446 [2024-12-14 22:38:54.023897] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:33.446 [2024-12-14 22:38:54.023939] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:33.446 [2024-12-14 22:38:54.023944] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:33.446 [2024-12-14 22:38:54.023954] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:33.446 [2024-12-14 22:38:54.023961] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:33.446 [2024-12-14 22:38:54.028053] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:33.446 [2024-12-14 22:38:54.028080] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1761de0 0 00:29:33.446 [2024-12-14 22:38:54.034916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:33.446 [2024-12-14 22:38:54.034929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:33.446 [2024-12-14 22:38:54.034933] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:33.446 [2024-12-14 22:38:54.034936] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:33.446 [2024-12-14 22:38:54.034958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.034963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.034967] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1761de0) 00:29:33.446 [2024-12-14 22:38:54.034978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:33.446 [2024-12-14 22:38:54.034994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bcf40, cid 0, qid 0 00:29:33.446 [2024-12-14 22:38:54.041912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.446 [2024-12-14 22:38:54.041921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.446 [2024-12-14 22:38:54.041924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.041928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bcf40) on tqpair=0x1761de0 00:29:33.446 [2024-12-14 22:38:54.041937] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:33.446 [2024-12-14 22:38:54.041943] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:33.446 [2024-12-14 22:38:54.041947] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:33.446 [2024-12-14 22:38:54.041957] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.041961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.041964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1761de0) 00:29:33.446 [2024-12-14 22:38:54.041971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.446 [2024-12-14 22:38:54.041984] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bcf40, cid 0, qid 0 00:29:33.446 [2024-12-14 22:38:54.042140] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.446 [2024-12-14 22:38:54.042149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.446 [2024-12-14 22:38:54.042152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.042155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bcf40) on tqpair=0x1761de0 00:29:33.446 [2024-12-14 22:38:54.042160] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:33.446 [2024-12-14 22:38:54.042166] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:33.446 [2024-12-14 22:38:54.042172] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.042176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.042179] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1761de0) 00:29:33.446 [2024-12-14 22:38:54.042185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.446 [2024-12-14 22:38:54.042195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bcf40, cid 0, qid 0 00:29:33.446 [2024-12-14 22:38:54.042261] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.446 [2024-12-14 22:38:54.042267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.446 [2024-12-14 22:38:54.042270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.042274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bcf40) on tqpair=0x1761de0 00:29:33.446 [2024-12-14 22:38:54.042278] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:29:33.446 [2024-12-14 22:38:54.042285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:33.446 [2024-12-14 22:38:54.042290] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.042294] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.042297] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1761de0) 00:29:33.446 [2024-12-14 22:38:54.042302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.446 [2024-12-14 22:38:54.042312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bcf40, cid 0, qid 0 00:29:33.446 [2024-12-14 22:38:54.042372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.446 [2024-12-14 22:38:54.042378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.446 [2024-12-14 22:38:54.042381] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.042385] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bcf40) on tqpair=0x1761de0 00:29:33.446 [2024-12-14 22:38:54.042389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:33.446 [2024-12-14 22:38:54.042397] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.042400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.042404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1761de0) 00:29:33.446 [2024-12-14 22:38:54.042409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.446 [2024-12-14 22:38:54.042418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bcf40, cid 0, qid 0 00:29:33.446 [2024-12-14 22:38:54.042478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.446 [2024-12-14 22:38:54.042484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.446 [2024-12-14 22:38:54.042486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.042490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bcf40) on tqpair=0x1761de0 00:29:33.446 [2024-12-14 22:38:54.042495] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:33.446 [2024-12-14 22:38:54.042500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:33.446 [2024-12-14 22:38:54.042507] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:33.446 [2024-12-14 22:38:54.042614] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:33.446 [2024-12-14 22:38:54.042619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:33.446 [2024-12-14 22:38:54.042625] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.042628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.042631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1761de0) 00:29:33.446 [2024-12-14 22:38:54.042636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.446 [2024-12-14 22:38:54.042646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bcf40, cid 0, qid 0 00:29:33.446 [2024-12-14 22:38:54.042710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.446 [2024-12-14 22:38:54.042715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.446 [2024-12-14 22:38:54.042718] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.042722] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bcf40) on tqpair=0x1761de0 00:29:33.446 [2024-12-14 22:38:54.042726] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:33.446 [2024-12-14 22:38:54.042734] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.042737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.446 [2024-12-14 22:38:54.042740] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1761de0) 00:29:33.446 [2024-12-14 22:38:54.042746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.446 [2024-12-14 22:38:54.042755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bcf40, cid 0, qid 0 00:29:33.446 [2024-12-14 22:38:54.042820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.446 [2024-12-14 22:38:54.042826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.446 [2024-12-14 22:38:54.042829] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.042832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bcf40) on tqpair=0x1761de0 00:29:33.447 [2024-12-14 22:38:54.042836] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:33.447 [2024-12-14 22:38:54.042840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:33.447 [2024-12-14 22:38:54.042846] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:33.447 [2024-12-14 22:38:54.042853] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:33.447 [2024-12-14 22:38:54.042860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.042864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1761de0) 00:29:33.447 [2024-12-14 22:38:54.042869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.447 [2024-12-14 22:38:54.042880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bcf40, cid 0, qid 0 00:29:33.447 [2024-12-14 22:38:54.042968] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:33.447 [2024-12-14 22:38:54.042974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:33.447 [2024-12-14 22:38:54.042977] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.042981] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1761de0): datao=0, datal=4096, cccid=0 00:29:33.447 [2024-12-14 22:38:54.042985] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17bcf40) on tqpair(0x1761de0): expected_datao=0, payload_size=4096 00:29:33.447 [2024-12-14 22:38:54.042988] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.042999] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.043003] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.084909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.447 [2024-12-14 22:38:54.084919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.447 [2024-12-14 22:38:54.084923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.084926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bcf40) on tqpair=0x1761de0 00:29:33.447 [2024-12-14 22:38:54.084933] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:33.447 [2024-12-14 22:38:54.084937] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:33.447 [2024-12-14 22:38:54.084941] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:33.447 [2024-12-14 22:38:54.084945] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:33.447 [2024-12-14 22:38:54.084949] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:33.447 [2024-12-14 22:38:54.084953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:33.447 [2024-12-14 22:38:54.084965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:33.447 [2024-12-14 22:38:54.084973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.084977] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.084981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1761de0) 00:29:33.447 [2024-12-14 22:38:54.084987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:33.447 [2024-12-14 22:38:54.085001] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bcf40, cid 0, qid 0 00:29:33.447 [2024-12-14 22:38:54.085063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.447 [2024-12-14 22:38:54.085068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.447 [2024-12-14 22:38:54.085071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bcf40) on tqpair=0x1761de0 00:29:33.447 [2024-12-14 22:38:54.085080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085087] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1761de0) 00:29:33.447 [2024-12-14 22:38:54.085092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.447 [2024-12-14 22:38:54.085097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085100] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1761de0) 00:29:33.447 [2024-12-14 22:38:54.085111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:29:33.447 [2024-12-14 22:38:54.085116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1761de0) 00:29:33.447 [2024-12-14 22:38:54.085127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.447 [2024-12-14 22:38:54.085132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085138] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1761de0) 00:29:33.447 [2024-12-14 22:38:54.085143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.447 [2024-12-14 22:38:54.085147] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:33.447 [2024-12-14 22:38:54.085157] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:33.447 [2024-12-14 22:38:54.085163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1761de0) 00:29:33.447 [2024-12-14 22:38:54.085171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.447 [2024-12-14 22:38:54.085183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x17bcf40, cid 0, qid 0 00:29:33.447 [2024-12-14 22:38:54.085187] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd0c0, cid 1, qid 0 00:29:33.447 [2024-12-14 22:38:54.085191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd240, cid 2, qid 0 00:29:33.447 [2024-12-14 22:38:54.085195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd3c0, cid 3, qid 0 00:29:33.447 [2024-12-14 22:38:54.085199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd540, cid 4, qid 0 00:29:33.447 [2024-12-14 22:38:54.085294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.447 [2024-12-14 22:38:54.085299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.447 [2024-12-14 22:38:54.085302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd540) on tqpair=0x1761de0 00:29:33.447 [2024-12-14 22:38:54.085310] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:29:33.447 [2024-12-14 22:38:54.085314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:33.447 [2024-12-14 22:38:54.085324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:33.447 [2024-12-14 22:38:54.085331] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:33.447 [2024-12-14 22:38:54.085336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.447 [2024-12-14 
22:38:54.085343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1761de0) 00:29:33.447 [2024-12-14 22:38:54.085348] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:33.447 [2024-12-14 22:38:54.085359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd540, cid 4, qid 0 00:29:33.447 [2024-12-14 22:38:54.085420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.447 [2024-12-14 22:38:54.085426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.447 [2024-12-14 22:38:54.085429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd540) on tqpair=0x1761de0 00:29:33.447 [2024-12-14 22:38:54.085480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:33.447 [2024-12-14 22:38:54.085490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:33.447 [2024-12-14 22:38:54.085496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1761de0) 00:29:33.447 [2024-12-14 22:38:54.085504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.447 [2024-12-14 22:38:54.085514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd540, cid 4, qid 0 00:29:33.447 [2024-12-14 22:38:54.085592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:33.447 [2024-12-14 22:38:54.085598] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:33.447 [2024-12-14 22:38:54.085601] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085604] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1761de0): datao=0, datal=4096, cccid=4 00:29:33.447 [2024-12-14 22:38:54.085608] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17bd540) on tqpair(0x1761de0): expected_datao=0, payload_size=4096 00:29:33.447 [2024-12-14 22:38:54.085612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085617] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085621] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.447 [2024-12-14 22:38:54.085639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.447 [2024-12-14 22:38:54.085642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.447 [2024-12-14 22:38:54.085645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd540) on tqpair=0x1761de0 00:29:33.447 [2024-12-14 22:38:54.085657] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:33.447 [2024-12-14 22:38:54.085664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:33.448 [2024-12-14 22:38:54.085673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:33.448 [2024-12-14 22:38:54.085678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.085682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x1761de0) 00:29:33.448 [2024-12-14 22:38:54.085687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.448 [2024-12-14 22:38:54.085697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd540, cid 4, qid 0 00:29:33.448 [2024-12-14 22:38:54.085783] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:33.448 [2024-12-14 22:38:54.085789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:33.448 [2024-12-14 22:38:54.085792] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.085795] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1761de0): datao=0, datal=4096, cccid=4 00:29:33.448 [2024-12-14 22:38:54.085801] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17bd540) on tqpair(0x1761de0): expected_datao=0, payload_size=4096 00:29:33.448 [2024-12-14 22:38:54.085804] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.085810] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.085813] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.085825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.448 [2024-12-14 22:38:54.085830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.448 [2024-12-14 22:38:54.085833] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.085836] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd540) on tqpair=0x1761de0 00:29:33.448 [2024-12-14 22:38:54.085846] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:33.448 
[2024-12-14 22:38:54.085855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:33.448 [2024-12-14 22:38:54.085861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.085865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1761de0) 00:29:33.448 [2024-12-14 22:38:54.085870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.448 [2024-12-14 22:38:54.085880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd540, cid 4, qid 0 00:29:33.448 [2024-12-14 22:38:54.085954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:33.448 [2024-12-14 22:38:54.085962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:33.448 [2024-12-14 22:38:54.085965] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.085968] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1761de0): datao=0, datal=4096, cccid=4 00:29:33.448 [2024-12-14 22:38:54.085972] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17bd540) on tqpair(0x1761de0): expected_datao=0, payload_size=4096 00:29:33.448 [2024-12-14 22:38:54.085976] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.085981] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.085984] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.085996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.448 [2024-12-14 22:38:54.086002] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.448 [2024-12-14 22:38:54.086004] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.086008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd540) on tqpair=0x1761de0 00:29:33.448 [2024-12-14 22:38:54.086014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:33.448 [2024-12-14 22:38:54.086022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:33.448 [2024-12-14 22:38:54.086029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:33.448 [2024-12-14 22:38:54.086035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:33.448 [2024-12-14 22:38:54.086039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:33.448 [2024-12-14 22:38:54.086044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:33.448 [2024-12-14 22:38:54.086050] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:33.448 [2024-12-14 22:38:54.086054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:33.448 [2024-12-14 22:38:54.086059] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:33.448 [2024-12-14 22:38:54.086071] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.086075] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1761de0) 00:29:33.448 [2024-12-14 22:38:54.086081] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.448 [2024-12-14 22:38:54.086087] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.086090] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.086093] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1761de0) 00:29:33.448 [2024-12-14 22:38:54.086098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.448 [2024-12-14 22:38:54.086110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd540, cid 4, qid 0 00:29:33.448 [2024-12-14 22:38:54.086115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd6c0, cid 5, qid 0 00:29:33.448 [2024-12-14 22:38:54.086192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.448 [2024-12-14 22:38:54.086198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.448 [2024-12-14 22:38:54.086201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.086204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd540) on tqpair=0x1761de0 00:29:33.448 [2024-12-14 22:38:54.086210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.448 [2024-12-14 22:38:54.086215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.448 [2024-12-14 22:38:54.086218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.086221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd6c0) on tqpair=0x1761de0 00:29:33.448 [2024-12-14 
22:38:54.086229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.086232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1761de0) 00:29:33.448 [2024-12-14 22:38:54.086238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.448 [2024-12-14 22:38:54.086247] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd6c0, cid 5, qid 0 00:29:33.448 [2024-12-14 22:38:54.086307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.448 [2024-12-14 22:38:54.086313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.448 [2024-12-14 22:38:54.086316] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.086319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd6c0) on tqpair=0x1761de0 00:29:33.448 [2024-12-14 22:38:54.086326] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.086330] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1761de0) 00:29:33.448 [2024-12-14 22:38:54.086335] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.448 [2024-12-14 22:38:54.086344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd6c0, cid 5, qid 0 00:29:33.448 [2024-12-14 22:38:54.086405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.448 [2024-12-14 22:38:54.086411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.448 [2024-12-14 22:38:54.086414] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.086419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x17bd6c0) on tqpair=0x1761de0 00:29:33.448 [2024-12-14 22:38:54.086426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.086430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1761de0) 00:29:33.448 [2024-12-14 22:38:54.086435] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.448 [2024-12-14 22:38:54.086444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd6c0, cid 5, qid 0 00:29:33.448 [2024-12-14 22:38:54.086505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.448 [2024-12-14 22:38:54.086511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.448 [2024-12-14 22:38:54.086514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.086517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd6c0) on tqpair=0x1761de0 00:29:33.448 [2024-12-14 22:38:54.086528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.086532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1761de0) 00:29:33.448 [2024-12-14 22:38:54.086537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.448 [2024-12-14 22:38:54.086543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.448 [2024-12-14 22:38:54.086546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1761de0) 00:29:33.449 [2024-12-14 22:38:54.086551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:33.449 [2024-12-14 22:38:54.086557] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086560] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1761de0) 00:29:33.449 [2024-12-14 22:38:54.086565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.449 [2024-12-14 22:38:54.086572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1761de0) 00:29:33.449 [2024-12-14 22:38:54.086580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.449 [2024-12-14 22:38:54.086590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd6c0, cid 5, qid 0 00:29:33.449 [2024-12-14 22:38:54.086595] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd540, cid 4, qid 0 00:29:33.449 [2024-12-14 22:38:54.086599] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd840, cid 6, qid 0 00:29:33.449 [2024-12-14 22:38:54.086603] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd9c0, cid 7, qid 0 00:29:33.449 [2024-12-14 22:38:54.086736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:33.449 [2024-12-14 22:38:54.086742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:33.449 [2024-12-14 22:38:54.086745] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086748] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1761de0): datao=0, datal=8192, cccid=5 00:29:33.449 [2024-12-14 22:38:54.086752] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17bd6c0) on tqpair(0x1761de0): expected_datao=0, payload_size=8192 00:29:33.449 [2024-12-14 22:38:54.086755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086767] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086770] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086781] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:33.449 [2024-12-14 22:38:54.086786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:33.449 [2024-12-14 22:38:54.086789] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086792] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1761de0): datao=0, datal=512, cccid=4 00:29:33.449 [2024-12-14 22:38:54.086796] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17bd540) on tqpair(0x1761de0): expected_datao=0, payload_size=512 00:29:33.449 [2024-12-14 22:38:54.086799] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086805] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086808] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:33.449 [2024-12-14 22:38:54.086817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:33.449 [2024-12-14 22:38:54.086820] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086823] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1761de0): datao=0, datal=512, cccid=6 00:29:33.449 [2024-12-14 22:38:54.086827] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x17bd840) on tqpair(0x1761de0): expected_datao=0, payload_size=512 00:29:33.449 [2024-12-14 22:38:54.086831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086836] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086839] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:33.449 [2024-12-14 22:38:54.086848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:33.449 [2024-12-14 22:38:54.086851] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086854] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1761de0): datao=0, datal=4096, cccid=7 00:29:33.449 [2024-12-14 22:38:54.086858] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17bd9c0) on tqpair(0x1761de0): expected_datao=0, payload_size=4096 00:29:33.449 [2024-12-14 22:38:54.086862] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086867] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086870] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.449 [2024-12-14 22:38:54.086882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.449 [2024-12-14 22:38:54.086885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd6c0) on tqpair=0x1761de0 00:29:33.449 [2024-12-14 22:38:54.086897] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.449 [2024-12-14 22:38:54.086910] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.449 [2024-12-14 22:38:54.086915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086920] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd540) on tqpair=0x1761de0 00:29:33.449 [2024-12-14 22:38:54.086930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.449 [2024-12-14 22:38:54.086935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.449 [2024-12-14 22:38:54.086938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd840) on tqpair=0x1761de0 00:29:33.449 [2024-12-14 22:38:54.086947] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:33.449 [2024-12-14 22:38:54.086952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:33.449 [2024-12-14 22:38:54.086955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:33.449 [2024-12-14 22:38:54.086960] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd9c0) on tqpair=0x1761de0 00:29:33.449 ===================================================== 00:29:33.449 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:33.449 ===================================================== 00:29:33.449 Controller Capabilities/Features 00:29:33.449 ================================ 00:29:33.449 Vendor ID: 8086 00:29:33.449 Subsystem Vendor ID: 8086 00:29:33.449 Serial Number: SPDK00000000000001 00:29:33.449 Model Number: SPDK bdev Controller 00:29:33.449 Firmware Version: 25.01 00:29:33.449 Recommended Arb Burst: 6 00:29:33.449 IEEE OUI Identifier: e4 d2 5c 00:29:33.449 Multi-path I/O 00:29:33.449 May have multiple subsystem ports: Yes 00:29:33.449 May have multiple controllers: Yes 00:29:33.449 Associated with SR-IOV VF: No 
00:29:33.449 Max Data Transfer Size: 131072 00:29:33.449 Max Number of Namespaces: 32 00:29:33.449 Max Number of I/O Queues: 127 00:29:33.449 NVMe Specification Version (VS): 1.3 00:29:33.449 NVMe Specification Version (Identify): 1.3 00:29:33.449 Maximum Queue Entries: 128 00:29:33.449 Contiguous Queues Required: Yes 00:29:33.449 Arbitration Mechanisms Supported 00:29:33.449 Weighted Round Robin: Not Supported 00:29:33.449 Vendor Specific: Not Supported 00:29:33.449 Reset Timeout: 15000 ms 00:29:33.449 Doorbell Stride: 4 bytes 00:29:33.449 NVM Subsystem Reset: Not Supported 00:29:33.449 Command Sets Supported 00:29:33.449 NVM Command Set: Supported 00:29:33.449 Boot Partition: Not Supported 00:29:33.449 Memory Page Size Minimum: 4096 bytes 00:29:33.449 Memory Page Size Maximum: 4096 bytes 00:29:33.449 Persistent Memory Region: Not Supported 00:29:33.449 Optional Asynchronous Events Supported 00:29:33.449 Namespace Attribute Notices: Supported 00:29:33.449 Firmware Activation Notices: Not Supported 00:29:33.449 ANA Change Notices: Not Supported 00:29:33.449 PLE Aggregate Log Change Notices: Not Supported 00:29:33.449 LBA Status Info Alert Notices: Not Supported 00:29:33.449 EGE Aggregate Log Change Notices: Not Supported 00:29:33.449 Normal NVM Subsystem Shutdown event: Not Supported 00:29:33.449 Zone Descriptor Change Notices: Not Supported 00:29:33.449 Discovery Log Change Notices: Not Supported 00:29:33.449 Controller Attributes 00:29:33.449 128-bit Host Identifier: Supported 00:29:33.449 Non-Operational Permissive Mode: Not Supported 00:29:33.449 NVM Sets: Not Supported 00:29:33.449 Read Recovery Levels: Not Supported 00:29:33.449 Endurance Groups: Not Supported 00:29:33.449 Predictable Latency Mode: Not Supported 00:29:33.449 Traffic Based Keep ALive: Not Supported 00:29:33.449 Namespace Granularity: Not Supported 00:29:33.449 SQ Associations: Not Supported 00:29:33.449 UUID List: Not Supported 00:29:33.449 Multi-Domain Subsystem: Not Supported 00:29:33.449 
Fixed Capacity Management: Not Supported 00:29:33.449 Variable Capacity Management: Not Supported 00:29:33.449 Delete Endurance Group: Not Supported 00:29:33.449 Delete NVM Set: Not Supported 00:29:33.449 Extended LBA Formats Supported: Not Supported 00:29:33.449 Flexible Data Placement Supported: Not Supported 00:29:33.449 00:29:33.449 Controller Memory Buffer Support 00:29:33.449 ================================ 00:29:33.449 Supported: No 00:29:33.449 00:29:33.449 Persistent Memory Region Support 00:29:33.449 ================================ 00:29:33.449 Supported: No 00:29:33.449 00:29:33.449 Admin Command Set Attributes 00:29:33.449 ============================ 00:29:33.449 Security Send/Receive: Not Supported 00:29:33.449 Format NVM: Not Supported 00:29:33.449 Firmware Activate/Download: Not Supported 00:29:33.449 Namespace Management: Not Supported 00:29:33.449 Device Self-Test: Not Supported 00:29:33.449 Directives: Not Supported 00:29:33.449 NVMe-MI: Not Supported 00:29:33.449 Virtualization Management: Not Supported 00:29:33.449 Doorbell Buffer Config: Not Supported 00:29:33.449 Get LBA Status Capability: Not Supported 00:29:33.449 Command & Feature Lockdown Capability: Not Supported 00:29:33.449 Abort Command Limit: 4 00:29:33.449 Async Event Request Limit: 4 00:29:33.449 Number of Firmware Slots: N/A 00:29:33.449 Firmware Slot 1 Read-Only: N/A 00:29:33.449 Firmware Activation Without Reset: N/A 00:29:33.449 Multiple Update Detection Support: N/A 00:29:33.449 Firmware Update Granularity: No Information Provided 00:29:33.449 Per-Namespace SMART Log: No 00:29:33.450 Asymmetric Namespace Access Log Page: Not Supported 00:29:33.450 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:33.450 Command Effects Log Page: Supported 00:29:33.450 Get Log Page Extended Data: Supported 00:29:33.450 Telemetry Log Pages: Not Supported 00:29:33.450 Persistent Event Log Pages: Not Supported 00:29:33.450 Supported Log Pages Log Page: May Support 00:29:33.450 Commands Supported & 
Effects Log Page: Not Supported 00:29:33.450 Feature Identifiers & Effects Log Page:May Support 00:29:33.450 NVMe-MI Commands & Effects Log Page: May Support 00:29:33.450 Data Area 4 for Telemetry Log: Not Supported 00:29:33.450 Error Log Page Entries Supported: 128 00:29:33.450 Keep Alive: Supported 00:29:33.450 Keep Alive Granularity: 10000 ms 00:29:33.450 00:29:33.450 NVM Command Set Attributes 00:29:33.450 ========================== 00:29:33.450 Submission Queue Entry Size 00:29:33.450 Max: 64 00:29:33.450 Min: 64 00:29:33.450 Completion Queue Entry Size 00:29:33.450 Max: 16 00:29:33.450 Min: 16 00:29:33.450 Number of Namespaces: 32 00:29:33.450 Compare Command: Supported 00:29:33.450 Write Uncorrectable Command: Not Supported 00:29:33.450 Dataset Management Command: Supported 00:29:33.450 Write Zeroes Command: Supported 00:29:33.450 Set Features Save Field: Not Supported 00:29:33.450 Reservations: Supported 00:29:33.450 Timestamp: Not Supported 00:29:33.450 Copy: Supported 00:29:33.450 Volatile Write Cache: Present 00:29:33.450 Atomic Write Unit (Normal): 1 00:29:33.450 Atomic Write Unit (PFail): 1 00:29:33.450 Atomic Compare & Write Unit: 1 00:29:33.450 Fused Compare & Write: Supported 00:29:33.450 Scatter-Gather List 00:29:33.450 SGL Command Set: Supported 00:29:33.450 SGL Keyed: Supported 00:29:33.450 SGL Bit Bucket Descriptor: Not Supported 00:29:33.450 SGL Metadata Pointer: Not Supported 00:29:33.450 Oversized SGL: Not Supported 00:29:33.450 SGL Metadata Address: Not Supported 00:29:33.450 SGL Offset: Supported 00:29:33.450 Transport SGL Data Block: Not Supported 00:29:33.450 Replay Protected Memory Block: Not Supported 00:29:33.450 00:29:33.450 Firmware Slot Information 00:29:33.450 ========================= 00:29:33.450 Active slot: 1 00:29:33.450 Slot 1 Firmware Revision: 25.01 00:29:33.450 00:29:33.450 00:29:33.450 Commands Supported and Effects 00:29:33.450 ============================== 00:29:33.450 Admin Commands 00:29:33.450 -------------- 
00:29:33.450 Get Log Page (02h): Supported 00:29:33.450 Identify (06h): Supported 00:29:33.450 Abort (08h): Supported 00:29:33.450 Set Features (09h): Supported 00:29:33.450 Get Features (0Ah): Supported 00:29:33.450 Asynchronous Event Request (0Ch): Supported 00:29:33.450 Keep Alive (18h): Supported 00:29:33.450 I/O Commands 00:29:33.450 ------------ 00:29:33.450 Flush (00h): Supported LBA-Change 00:29:33.450 Write (01h): Supported LBA-Change 00:29:33.450 Read (02h): Supported 00:29:33.450 Compare (05h): Supported 00:29:33.450 Write Zeroes (08h): Supported LBA-Change 00:29:33.450 Dataset Management (09h): Supported LBA-Change 00:29:33.450 Copy (19h): Supported LBA-Change 00:29:33.450 00:29:33.450 Error Log 00:29:33.450 ========= 00:29:33.450 00:29:33.450 Arbitration 00:29:33.450 =========== 00:29:33.450 Arbitration Burst: 1 00:29:33.450 00:29:33.450 Power Management 00:29:33.450 ================ 00:29:33.450 Number of Power States: 1 00:29:33.450 Current Power State: Power State #0 00:29:33.450 Power State #0: 00:29:33.450 Max Power: 0.00 W 00:29:33.450 Non-Operational State: Operational 00:29:33.450 Entry Latency: Not Reported 00:29:33.450 Exit Latency: Not Reported 00:29:33.450 Relative Read Throughput: 0 00:29:33.450 Relative Read Latency: 0 00:29:33.450 Relative Write Throughput: 0 00:29:33.450 Relative Write Latency: 0 00:29:33.450 Idle Power: Not Reported 00:29:33.450 Active Power: Not Reported 00:29:33.450 Non-Operational Permissive Mode: Not Supported 00:29:33.450 00:29:33.450 Health Information 00:29:33.450 ================== 00:29:33.450 Critical Warnings: 00:29:33.450 Available Spare Space: OK 00:29:33.450 Temperature: OK 00:29:33.450 Device Reliability: OK 00:29:33.450 Read Only: No 00:29:33.450 Volatile Memory Backup: OK 00:29:33.450 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:33.450 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:33.450 Available Spare: 0% 00:29:33.450 Available Spare Threshold: 0% 00:29:33.450 Life Percentage 
Used:[2024-12-14 22:38:54.087041] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:33.450 [2024-12-14 22:38:54.087045] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1761de0)
00:29:33.450 [2024-12-14 22:38:54.087051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.450 [2024-12-14 22:38:54.087063] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd9c0, cid 7, qid 0
00:29:33.450 [2024-12-14 22:38:54.087135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:33.450 [2024-12-14 22:38:54.087141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:33.450 [2024-12-14 22:38:54.087144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:33.450 [2024-12-14 22:38:54.087147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd9c0) on tqpair=0x1761de0
00:29:33.450 [2024-12-14 22:38:54.087175] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:29:33.450 [2024-12-14 22:38:54.087183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bcf40) on tqpair=0x1761de0
00:29:33.450 [2024-12-14 22:38:54.087189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:33.450 [2024-12-14 22:38:54.087193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd0c0) on tqpair=0x1761de0
00:29:33.450 [2024-12-14 22:38:54.087197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:33.450 [2024-12-14 22:38:54.087202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd240) on tqpair=0x1761de0
00:29:33.450 [2024-12-14 22:38:54.087205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:33.450 [2024-12-14 22:38:54.087210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd3c0) on tqpair=0x1761de0
00:29:33.450 [2024-12-14 22:38:54.087214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:33.450 [2024-12-14 22:38:54.087220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:33.450 [2024-12-14 22:38:54.087224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:33.450 [2024-12-14 22:38:54.087227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1761de0)
00:29:33.450 [2024-12-14 22:38:54.087233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.450 [2024-12-14 22:38:54.087244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd3c0, cid 3, qid 0
00:29:33.450 [2024-12-14 22:38:54.087304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:33.450 [2024-12-14 22:38:54.087310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:33.450 [2024-12-14 22:38:54.087313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:33.450 [2024-12-14 22:38:54.087316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd3c0) on tqpair=0x1761de0
00:29:33.450 [2024-12-14 22:38:54.087322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:33.450 [2024-12-14 22:38:54.087325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:33.450 [2024-12-14 22:38:54.087328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1761de0)
00:29:33.450 [2024-12-14 22:38:54.087334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.450 [2024-12-14 22:38:54.087346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd3c0, cid 3, qid 0
00:29:33.450 [2024-12-14 22:38:54.087420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:33.450 [2024-12-14 22:38:54.087426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:33.450 [2024-12-14 22:38:54.087430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:33.450 [2024-12-14 22:38:54.087434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd3c0) on tqpair=0x1761de0
00:29:33.450 [2024-12-14 22:38:54.087438] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us
00:29:33.450 [2024-12-14 22:38:54.087441] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
00:29:33.450 [2024-12-14 22:38:54.087449] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:33.450 [2024-12-14 22:38:54.087453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:33.450 [2024-12-14 22:38:54.087456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1761de0)
00:29:33.450 [2024-12-14 22:38:54.087461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.450 [2024-12-14 22:38:54.087471] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd3c0, cid 3, qid 0
00:29:33.450 [2024-12-14 22:38:54.087531] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:33.450 [2024-12-14 22:38:54.087537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:33.450 [2024-12-14 22:38:54.087540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:33.450 [2024-12-14 22:38:54.087543]
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd3c0) on tqpair=0x1761de0
00:29:33.452 [2024-12-14 22:38:54.093911] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:33.452 [2024-12-14 22:38:54.093920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:33.452 [2024-12-14 22:38:54.093923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:33.452 [2024-12-14 22:38:54.093926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd3c0) on tqpair=0x1761de0
00:29:33.452 [2024-12-14 22:38:54.093936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:33.452 [2024-12-14 22:38:54.093942] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:33.452 [2024-12-14 22:38:54.093945] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1761de0)
00:29:33.452 [2024-12-14 22:38:54.093951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.452 [2024-12-14 22:38:54.093964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17bd3c0, cid 3, qid 0
00:29:33.452 [2024-12-14 22:38:54.094113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:33.452 [2024-12-14 22:38:54.094118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:33.452 [2024-12-14 22:38:54.094121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:33.452 [2024-12-14 22:38:54.094124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17bd3c0) on tqpair=0x1761de0
00:29:33.452 [2024-12-14 22:38:54.094131] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds
00:29:33.452 0%
00:29:33.452 Data Units Read: 0
00:29:33.452 Data Units Written: 0
00:29:33.452 Host Read Commands: 0
00:29:33.452 Host Write Commands: 0
00:29:33.452 Controller Busy Time: 0 minutes
00:29:33.452 Power Cycles: 0
00:29:33.452 Power On Hours: 0 hours
00:29:33.452 Unsafe Shutdowns: 0
00:29:33.452 Unrecoverable Media Errors: 0
00:29:33.452 Lifetime Error Log Entries: 0
00:29:33.452 Warning Temperature Time: 0 minutes
00:29:33.452 Critical Temperature Time: 0 minutes
00:29:33.452
00:29:33.452 Number of Queues
00:29:33.452 ================
00:29:33.452 Number of I/O Submission Queues: 127
00:29:33.452 Number of I/O Completion Queues: 127
00:29:33.452
00:29:33.452 Active Namespaces
00:29:33.452 =================
00:29:33.452 Namespace ID:1
00:29:33.453 Error Recovery Timeout: Unlimited
00:29:33.453 Command Set Identifier: NVM (00h)
Deallocate: Supported
00:29:33.453 Deallocated/Unwritten Error: Not Supported
00:29:33.453 Deallocated Read Value: Unknown
00:29:33.453 Deallocate in Write Zeroes: Not Supported
00:29:33.453 Deallocated Guard Field: 0xFFFF
00:29:33.453 Flush: Supported
00:29:33.453 Reservation: Supported
00:29:33.453 Namespace Sharing Capabilities: Multiple Controllers
00:29:33.453 Size (in LBAs): 131072 (0GiB)
00:29:33.453 Capacity (in LBAs): 131072 (0GiB)
00:29:33.453 Utilization (in LBAs): 131072 (0GiB)
00:29:33.453 NGUID: ABCDEF0123456789ABCDEF0123456789
00:29:33.453 EUI64: ABCDEF0123456789
00:29:33.453 UUID: 72ef498e-1d14-4a82-932c-38f0fc61a004
00:29:33.453 Thin Provisioning: Not Supported
00:29:33.453 Per-NS Atomic Units: Yes
00:29:33.453 Atomic Boundary Size (Normal): 0
00:29:33.453 Atomic Boundary Size (PFail): 0
00:29:33.453 Atomic Boundary Offset: 0
00:29:33.453 Maximum Single Source Range Length: 65535
00:29:33.453 Maximum Copy Length: 65535
00:29:33.453 Maximum Source Range Count: 1
00:29:33.453 NGUID/EUI64 Never Reused: No
00:29:33.453 Namespace Write Protected: No
00:29:33.453 Number of LBA Formats: 1
00:29:33.453 Current LBA Format: LBA Format #00
00:29:33.453 LBA Format #00: Data Size: 512 Metadata Size: 0
00:29:33.453
00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:33.453 rmmod nvme_tcp 00:29:33.453 rmmod nvme_fabrics 00:29:33.453 rmmod nvme_keyring 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 449622 ']' 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 449622 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 449622 ']' 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 449622 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 449622 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 449622' 00:29:33.453 killing process with pid 449622 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 449622 00:29:33.453 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 449622 00:29:33.712 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:33.712 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:33.712 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:33.712 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:33.712 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:29:33.712 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:33.712 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:29:33.712 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:33.712 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:33.712 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.712 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:33.712 22:38:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.616 22:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:35.616 00:29:35.616 real 0m9.215s 00:29:35.616 user 0m5.445s 00:29:35.616 sys 0m4.771s 00:29:35.616 22:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 
-- # set +x 00:29:35.875 ************************************ 00:29:35.875 END TEST nvmf_identify 00:29:35.875 ************************************ 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.875 ************************************ 00:29:35.875 START TEST nvmf_perf 00:29:35.875 ************************************ 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:35.875 * Looking for test storage... 00:29:35.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:35.875 22:38:56 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:35.875 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:35.876 22:38:56 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:35.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.876 --rc genhtml_branch_coverage=1 00:29:35.876 --rc genhtml_function_coverage=1 00:29:35.876 --rc genhtml_legend=1 00:29:35.876 --rc geninfo_all_blocks=1 00:29:35.876 --rc geninfo_unexecuted_blocks=1 00:29:35.876 00:29:35.876 ' 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:35.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.876 --rc genhtml_branch_coverage=1 00:29:35.876 --rc genhtml_function_coverage=1 00:29:35.876 --rc genhtml_legend=1 00:29:35.876 --rc geninfo_all_blocks=1 00:29:35.876 --rc geninfo_unexecuted_blocks=1 00:29:35.876 00:29:35.876 ' 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:35.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.876 --rc genhtml_branch_coverage=1 00:29:35.876 --rc genhtml_function_coverage=1 00:29:35.876 --rc genhtml_legend=1 00:29:35.876 --rc geninfo_all_blocks=1 00:29:35.876 --rc geninfo_unexecuted_blocks=1 00:29:35.876 00:29:35.876 ' 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:35.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.876 --rc genhtml_branch_coverage=1 00:29:35.876 --rc genhtml_function_coverage=1 00:29:35.876 --rc genhtml_legend=1 00:29:35.876 --rc geninfo_all_blocks=1 00:29:35.876 --rc geninfo_unexecuted_blocks=1 00:29:35.876 00:29:35.876 ' 
00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.876 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:36.136 22:38:56 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:36.136 22:38:56 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:36.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:29:36.136 22:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:42.705 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:42.705 
22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:42.705 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:42.705 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:42.706 Found net devices under 0000:af:00.0: cvl_0_0 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:42.706 Found net devices under 0000:af:00.1: cvl_0_1 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:42.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:42.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:29:42.706 00:29:42.706 --- 10.0.0.2 ping statistics --- 00:29:42.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.706 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:42.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:42.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:29:42.706 00:29:42.706 --- 10.0.0.1 ping statistics --- 00:29:42.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.706 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=453243 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 453243 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:42.706 
22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 453243 ']' 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:42.706 [2024-12-14 22:39:02.688110] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:42.706 [2024-12-14 22:39:02.688152] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.706 [2024-12-14 22:39:02.764935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:42.706 [2024-12-14 22:39:02.787901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.706 [2024-12-14 22:39:02.787942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.706 [2024-12-14 22:39:02.787952] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:42.706 [2024-12-14 22:39:02.787958] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:42.706 [2024-12-14 22:39:02.787963] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:42.706 [2024-12-14 22:39:02.789281] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.706 [2024-12-14 22:39:02.789387] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.706 [2024-12-14 22:39:02.789505] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.706 [2024-12-14 22:39:02.789505] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:42.706 22:39:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:45.239 22:39:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:45.239 22:39:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:45.497 22:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:29:45.497 22:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:45.756 22:39:06 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:45.756 22:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:29:45.756 22:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:45.756 22:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:45.756 22:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:45.756 [2024-12-14 22:39:06.598394] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.756 22:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:46.015 22:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:46.015 22:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:46.273 22:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:46.273 22:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:46.532 22:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:46.791 [2024-12-14 22:39:07.433421] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.791 22:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:29:46.791 22:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:29:46.791 22:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:46.791 22:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:46.791 22:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:48.168 Initializing NVMe Controllers 00:29:48.168 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:29:48.168 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:29:48.168 Initialization complete. Launching workers. 00:29:48.168 ======================================================== 00:29:48.168 Latency(us) 00:29:48.168 Device Information : IOPS MiB/s Average min max 00:29:48.168 PCIE (0000:5e:00.0) NSID 1 from core 0: 99411.06 388.32 321.36 9.22 7457.35 00:29:48.168 ======================================================== 00:29:48.168 Total : 99411.06 388.32 321.36 9.22 7457.35 00:29:48.168 00:29:48.168 22:39:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:49.545 Initializing NVMe Controllers 00:29:49.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:49.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:49.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:49.545 Initialization complete. Launching workers. 
00:29:49.545 ======================================================== 00:29:49.545 Latency(us) 00:29:49.545 Device Information : IOPS MiB/s Average min max 00:29:49.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 134.52 0.53 7628.05 104.10 44671.88 00:29:49.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.76 0.26 15681.31 6975.97 47886.40 00:29:49.545 ======================================================== 00:29:49.545 Total : 201.28 0.79 10299.18 104.10 47886.40 00:29:49.545 00:29:49.545 22:39:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:50.923 Initializing NVMe Controllers 00:29:50.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:50.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:50.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:50.923 Initialization complete. Launching workers. 
00:29:50.923 ======================================================== 00:29:50.923 Latency(us) 00:29:50.923 Device Information : IOPS MiB/s Average min max 00:29:50.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11210.30 43.79 2855.79 492.37 6708.95 00:29:50.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3780.77 14.77 8506.77 5216.59 17590.68 00:29:50.923 ======================================================== 00:29:50.923 Total : 14991.07 58.56 4280.97 492.37 17590.68 00:29:50.923 00:29:50.923 22:39:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:50.923 22:39:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:50.923 22:39:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:53.459 Initializing NVMe Controllers 00:29:53.459 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:53.459 Controller IO queue size 128, less than required. 00:29:53.459 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:53.459 Controller IO queue size 128, less than required. 00:29:53.459 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:53.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:53.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:53.459 Initialization complete. Launching workers. 
00:29:53.459 ======================================================== 00:29:53.459 Latency(us) 00:29:53.459 Device Information : IOPS MiB/s Average min max 00:29:53.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1819.87 454.97 71656.52 49065.55 125419.81 00:29:53.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 609.96 152.49 214724.38 92512.60 370383.23 00:29:53.459 ======================================================== 00:29:53.459 Total : 2429.82 607.46 107570.68 49065.55 370383.23 00:29:53.459 00:29:53.459 22:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:53.459 No valid NVMe controllers or AIO or URING devices found 00:29:53.459 Initializing NVMe Controllers 00:29:53.459 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:53.459 Controller IO queue size 128, less than required. 00:29:53.459 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:53.459 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:53.459 Controller IO queue size 128, less than required. 00:29:53.459 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:53.459 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:29:53.459 WARNING: Some requested NVMe devices were skipped 00:29:53.459 22:39:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:55.996 Initializing NVMe Controllers 00:29:55.996 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:55.996 Controller IO queue size 128, less than required. 00:29:55.996 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:55.996 Controller IO queue size 128, less than required. 00:29:55.996 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:55.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:55.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:55.996 Initialization complete. Launching workers. 
00:29:55.996 00:29:55.996 ==================== 00:29:55.996 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:55.996 TCP transport: 00:29:55.996 polls: 11894 00:29:55.996 idle_polls: 8566 00:29:55.996 sock_completions: 3328 00:29:55.996 nvme_completions: 6327 00:29:55.996 submitted_requests: 9562 00:29:55.996 queued_requests: 1 00:29:55.996 00:29:55.996 ==================== 00:29:55.996 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:55.996 TCP transport: 00:29:55.996 polls: 15728 00:29:55.996 idle_polls: 11533 00:29:55.996 sock_completions: 4195 00:29:55.996 nvme_completions: 6955 00:29:55.996 submitted_requests: 10474 00:29:55.996 queued_requests: 1 00:29:55.996 ======================================================== 00:29:55.996 Latency(us) 00:29:55.996 Device Information : IOPS MiB/s Average min max 00:29:55.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1578.20 394.55 82541.21 52642.31 129274.31 00:29:55.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1734.87 433.72 74520.71 41667.34 119394.89 00:29:55.996 ======================================================== 00:29:55.996 Total : 3313.07 828.27 78341.32 41667.34 129274.31 00:29:55.996 00:29:55.996 22:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:55.996 22:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:55.996 22:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:55.996 22:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:29:55.996 22:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:59.287 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=6cee9953-b4ef-4cae-8ea6-38153c6e6a97 00:29:59.287 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 6cee9953-b4ef-4cae-8ea6-38153c6e6a97 00:29:59.287 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=6cee9953-b4ef-4cae-8ea6-38153c6e6a97 00:29:59.287 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:29:59.287 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:29:59.287 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:29:59.287 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:59.545 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:29:59.545 { 00:29:59.545 "uuid": "6cee9953-b4ef-4cae-8ea6-38153c6e6a97", 00:29:59.545 "name": "lvs_0", 00:29:59.545 "base_bdev": "Nvme0n1", 00:29:59.545 "total_data_clusters": 238234, 00:29:59.545 "free_clusters": 238234, 00:29:59.545 "block_size": 512, 00:29:59.545 "cluster_size": 4194304 00:29:59.545 } 00:29:59.545 ]' 00:29:59.545 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="6cee9953-b4ef-4cae-8ea6-38153c6e6a97") .free_clusters' 00:29:59.546 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:29:59.546 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="6cee9953-b4ef-4cae-8ea6-38153c6e6a97") .cluster_size' 00:29:59.546 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:29:59.546 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:29:59.546 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 
00:29:59.546 952936 00:29:59.546 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:59.546 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:59.546 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6cee9953-b4ef-4cae-8ea6-38153c6e6a97 lbd_0 20480 00:30:00.113 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=87fd3032-9326-4f14-acb1-54b2403b975b 00:30:00.113 22:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 87fd3032-9326-4f14-acb1-54b2403b975b lvs_n_0 00:30:00.680 22:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=98a1a053-b953-4286-b920-3f2c38d72a30 00:30:00.680 22:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 98a1a053-b953-4286-b920-3f2c38d72a30 00:30:00.680 22:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=98a1a053-b953-4286-b920-3f2c38d72a30 00:30:00.680 22:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:00.680 22:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:00.680 22:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:00.680 22:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:00.939 22:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:00.939 { 00:30:00.939 "uuid": "6cee9953-b4ef-4cae-8ea6-38153c6e6a97", 00:30:00.939 "name": "lvs_0", 00:30:00.939 "base_bdev": "Nvme0n1", 00:30:00.939 "total_data_clusters": 238234, 00:30:00.939 "free_clusters": 233114, 00:30:00.939 "block_size": 512, 00:30:00.939 
"cluster_size": 4194304 00:30:00.939 }, 00:30:00.939 { 00:30:00.939 "uuid": "98a1a053-b953-4286-b920-3f2c38d72a30", 00:30:00.939 "name": "lvs_n_0", 00:30:00.939 "base_bdev": "87fd3032-9326-4f14-acb1-54b2403b975b", 00:30:00.939 "total_data_clusters": 5114, 00:30:00.939 "free_clusters": 5114, 00:30:00.939 "block_size": 512, 00:30:00.939 "cluster_size": 4194304 00:30:00.939 } 00:30:00.939 ]' 00:30:00.939 22:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="98a1a053-b953-4286-b920-3f2c38d72a30") .free_clusters' 00:30:00.939 22:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:00.939 22:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="98a1a053-b953-4286-b920-3f2c38d72a30") .cluster_size' 00:30:01.198 22:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:01.198 22:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:01.198 22:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:30:01.198 20456 00:30:01.198 22:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:01.198 22:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 98a1a053-b953-4286-b920-3f2c38d72a30 lbd_nest_0 20456 00:30:01.198 22:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=ddc6d030-c2d7-4c5d-aa27-51d0cc907773 00:30:01.198 22:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:01.457 22:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:01.457 22:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 ddc6d030-c2d7-4c5d-aa27-51d0cc907773 00:30:01.715 22:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:01.974 22:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:01.974 22:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:01.974 22:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:01.974 22:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:01.974 22:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:14.181 Initializing NVMe Controllers 00:30:14.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:14.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:14.181 Initialization complete. Launching workers. 
00:30:14.181 ======================================================== 00:30:14.181 Latency(us) 00:30:14.181 Device Information : IOPS MiB/s Average min max 00:30:14.181 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.20 0.02 20383.47 126.23 47794.38 00:30:14.181 ======================================================== 00:30:14.181 Total : 49.20 0.02 20383.47 126.23 47794.38 00:30:14.181 00:30:14.181 22:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:14.181 22:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:24.157 Initializing NVMe Controllers 00:30:24.157 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:24.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:24.157 Initialization complete. Launching workers. 
00:30:24.157 ======================================================== 00:30:24.157 Latency(us) 00:30:24.157 Device Information : IOPS MiB/s Average min max 00:30:24.157 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 65.70 8.21 15226.81 6986.04 51873.37 00:30:24.157 ======================================================== 00:30:24.157 Total : 65.70 8.21 15226.81 6986.04 51873.37 00:30:24.157 00:30:24.157 22:39:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:24.157 22:39:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:24.157 22:39:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:34.139 Initializing NVMe Controllers 00:30:34.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:34.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:34.139 Initialization complete. Launching workers. 
00:30:34.139 ======================================================== 00:30:34.139 Latency(us) 00:30:34.139 Device Information : IOPS MiB/s Average min max 00:30:34.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8633.04 4.22 3709.45 237.67 45546.60 00:30:34.139 ======================================================== 00:30:34.139 Total : 8633.04 4.22 3709.45 237.67 45546.60 00:30:34.139 00:30:34.139 22:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:34.139 22:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:44.113 Initializing NVMe Controllers 00:30:44.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:44.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:44.113 Initialization complete. Launching workers. 
00:30:44.113 ======================================================== 00:30:44.113 Latency(us) 00:30:44.113 Device Information : IOPS MiB/s Average min max 00:30:44.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4432.25 554.03 7220.04 606.24 16955.78 00:30:44.113 ======================================================== 00:30:44.113 Total : 4432.25 554.03 7220.04 606.24 16955.78 00:30:44.113 00:30:44.113 22:40:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:44.113 22:40:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:44.113 22:40:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:54.089 Initializing NVMe Controllers 00:30:54.089 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:54.089 Controller IO queue size 128, less than required. 00:30:54.089 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:54.089 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:54.089 Initialization complete. Launching workers. 
00:30:54.089 ======================================================== 00:30:54.089 Latency(us) 00:30:54.089 Device Information : IOPS MiB/s Average min max 00:30:54.089 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15789.20 7.71 8110.26 1375.00 22569.86 00:30:54.089 ======================================================== 00:30:54.089 Total : 15789.20 7.71 8110.26 1375.00 22569.86 00:30:54.089 00:30:54.089 22:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:54.089 22:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:04.066 Initializing NVMe Controllers 00:31:04.066 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:04.066 Controller IO queue size 128, less than required. 00:31:04.066 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:04.066 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:04.066 Initialization complete. Launching workers. 
00:31:04.066 ======================================================== 00:31:04.066 Latency(us) 00:31:04.066 Device Information : IOPS MiB/s Average min max 00:31:04.066 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1206.75 150.84 106243.53 16833.57 212040.35 00:31:04.066 ======================================================== 00:31:04.066 Total : 1206.75 150.84 106243.53 16833.57 212040.35 00:31:04.066 00:31:04.066 22:40:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:04.325 22:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ddc6d030-c2d7-4c5d-aa27-51d0cc907773 00:31:04.894 22:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:05.153 22:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 87fd3032-9326-4f14-acb1-54b2403b975b 00:31:05.412 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:05.671 rmmod nvme_tcp 00:31:05.671 rmmod nvme_fabrics 00:31:05.671 rmmod nvme_keyring 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 453243 ']' 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 453243 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 453243 ']' 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 453243 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 453243 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 453243' 00:31:05.671 killing process with pid 453243 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 453243 00:31:05.671 22:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 453243 00:31:07.048 22:40:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:07.048 22:40:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:31:07.048 22:40:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:07.048 22:40:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:07.048 22:40:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:07.048 22:40:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:07.048 22:40:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:07.048 22:40:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:07.048 22:40:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:07.048 22:40:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.048 22:40:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.048 22:40:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.585 22:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:09.585 00:31:09.585 real 1m33.410s 00:31:09.585 user 5m33.327s 00:31:09.585 sys 0m17.086s 00:31:09.585 22:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:09.585 22:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:09.585 ************************************ 00:31:09.585 END TEST nvmf_perf 00:31:09.585 ************************************ 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:09.585 ************************************ 00:31:09.585 START TEST nvmf_fio_host 00:31:09.585 ************************************ 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:09.585 * Looking for test storage... 00:31:09.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- 
# export 'LCOV_OPTS= 00:31:09.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.585 --rc genhtml_branch_coverage=1 00:31:09.585 --rc genhtml_function_coverage=1 00:31:09.585 --rc genhtml_legend=1 00:31:09.585 --rc geninfo_all_blocks=1 00:31:09.585 --rc geninfo_unexecuted_blocks=1 00:31:09.585 00:31:09.585 ' 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:09.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.585 --rc genhtml_branch_coverage=1 00:31:09.585 --rc genhtml_function_coverage=1 00:31:09.585 --rc genhtml_legend=1 00:31:09.585 --rc geninfo_all_blocks=1 00:31:09.585 --rc geninfo_unexecuted_blocks=1 00:31:09.585 00:31:09.585 ' 00:31:09.585 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:09.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.585 --rc genhtml_branch_coverage=1 00:31:09.585 --rc genhtml_function_coverage=1 00:31:09.585 --rc genhtml_legend=1 00:31:09.585 --rc geninfo_all_blocks=1 00:31:09.585 --rc geninfo_unexecuted_blocks=1 00:31:09.586 00:31:09.586 ' 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:09.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.586 --rc genhtml_branch_coverage=1 00:31:09.586 --rc genhtml_function_coverage=1 00:31:09.586 --rc genhtml_legend=1 00:31:09.586 --rc geninfo_all_blocks=1 00:31:09.586 --rc geninfo_unexecuted_blocks=1 00:31:09.586 00:31:09.586 ' 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:09.586 22:40:30 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:09.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:09.586 22:40:30 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:09.586 22:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:16.156 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:31:16.157 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:16.157 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.157 22:40:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:16.157 Found net devices under 0000:af:00.0: cvl_0_0 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:16.157 Found net devices under 0000:af:00.1: cvl_0_1 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:16.157 22:40:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.157 22:40:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:16.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:16.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:31:16.157 00:31:16.157 --- 10.0.0.2 ping statistics --- 00:31:16.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.157 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:16.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:31:16.157 00:31:16.157 --- 10.0.0.1 ping statistics --- 00:31:16.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.157 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=470521 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 470521 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 470521 ']' 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.157 [2024-12-14 22:40:36.244457] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:31:16.157 [2024-12-14 22:40:36.244501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.157 [2024-12-14 22:40:36.318999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:16.157 [2024-12-14 22:40:36.341884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.157 [2024-12-14 22:40:36.341927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:16.157 [2024-12-14 22:40:36.341934] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.157 [2024-12-14 22:40:36.341939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.157 [2024-12-14 22:40:36.341944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:16.157 [2024-12-14 22:40:36.343325] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.157 [2024-12-14 22:40:36.343434] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:16.157 [2024-12-14 22:40:36.343544] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.157 [2024-12-14 22:40:36.343546] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:16.157 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:16.158 [2024-12-14 22:40:36.615730] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.158 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:16.158 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:16.158 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.158 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:16.158 Malloc1 00:31:16.158 22:40:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:16.417 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:16.676 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:16.676 [2024-12-14 22:40:37.467809] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.676 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:16.935 22:40:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:16.935 22:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:17.194 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:17.194 fio-3.35 00:31:17.194 Starting 1 thread 00:31:19.727 00:31:19.727 test: (groupid=0, jobs=1): err= 0: pid=471099: Sat Dec 14 22:40:40 2024 00:31:19.727 read: IOPS=11.9k, BW=46.3MiB/s (48.6MB/s)(92.9MiB/2005msec) 00:31:19.728 slat (nsec): min=1532, max=213142, avg=1690.31, stdev=1924.23 00:31:19.728 clat (usec): min=2702, max=10765, avg=5942.54, stdev=467.98 00:31:19.728 lat (usec): min=2733, max=10766, avg=5944.23, stdev=467.88 00:31:19.728 clat percentiles (usec): 00:31:19.728 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5604], 00:31:19.728 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:31:19.728 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:31:19.728 | 99.00th=[ 6980], 99.50th=[ 7111], 99.90th=[ 9110], 99.95th=[ 9634], 00:31:19.728 | 99.99th=[10683] 00:31:19.728 bw ( KiB/s): min=46192, max=47928, per=99.95%, avg=47410.00, stdev=823.38, samples=4 00:31:19.728 iops : min=11548, max=11982, avg=11852.50, stdev=205.84, samples=4 00:31:19.728 write: IOPS=11.8k, BW=46.1MiB/s (48.3MB/s)(92.4MiB/2005msec); 0 zone resets 00:31:19.728 slat (nsec): min=1563, max=193291, avg=1755.17, stdev=1407.68 00:31:19.728 clat (usec): min=2051, max=9183, avg=4807.37, stdev=374.86 00:31:19.728 lat (usec): min=2064, max=9185, avg=4809.13, stdev=374.82 00:31:19.728 clat percentiles (usec): 00:31:19.728 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:31:19.728 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883], 
00:31:19.728 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5276], 95.00th=[ 5342], 00:31:19.728 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7242], 99.95th=[ 8586], 00:31:19.728 | 99.99th=[ 9110] 00:31:19.728 bw ( KiB/s): min=46696, max=47912, per=100.00%, avg=47222.00, stdev=550.72, samples=4 00:31:19.728 iops : min=11674, max=11978, avg=11805.50, stdev=137.68, samples=4 00:31:19.728 lat (msec) : 4=0.79%, 10=99.19%, 20=0.02% 00:31:19.728 cpu : usr=75.50%, sys=23.55%, ctx=100, majf=0, minf=3 00:31:19.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:19.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:19.728 issued rwts: total=23776,23667,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:19.728 00:31:19.728 Run status group 0 (all jobs): 00:31:19.728 READ: bw=46.3MiB/s (48.6MB/s), 46.3MiB/s-46.3MiB/s (48.6MB/s-48.6MB/s), io=92.9MiB (97.4MB), run=2005-2005msec 00:31:19.728 WRITE: bw=46.1MiB/s (48.3MB/s), 46.1MiB/s-46.1MiB/s (48.3MB/s-48.3MB/s), io=92.4MiB (96.9MB), run=2005-2005msec 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:19.728 22:40:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:19.987 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:19.987 fio-3.35 00:31:19.987 Starting 1 thread 00:31:22.520 00:31:22.520 test: (groupid=0, jobs=1): err= 0: pid=471654: Sat Dec 14 22:40:43 2024 00:31:22.520 read: IOPS=11.0k, BW=172MiB/s (181MB/s)(346MiB/2005msec) 00:31:22.520 slat (nsec): min=2461, max=95094, avg=2800.91, stdev=1418.02 00:31:22.520 clat (usec): min=1823, max=14043, avg=6709.31, stdev=1570.91 00:31:22.520 lat (usec): min=1826, max=14046, avg=6712.11, stdev=1570.99 00:31:22.520 clat percentiles (usec): 00:31:22.520 | 1.00th=[ 3621], 5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 5342], 00:31:22.520 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6718], 60.00th=[ 7177], 00:31:22.520 | 70.00th=[ 7570], 80.00th=[ 7963], 90.00th=[ 8586], 95.00th=[ 9372], 00:31:22.520 | 99.00th=[10814], 99.50th=[11338], 99.90th=[12256], 99.95th=[12387], 00:31:22.520 | 99.99th=[13960] 00:31:22.520 bw ( KiB/s): min=83424, max=96832, per=50.49%, avg=89096.00, stdev=6007.11, samples=4 00:31:22.520 iops : min= 5214, max= 6052, avg=5568.50, stdev=375.44, samples=4 00:31:22.520 write: IOPS=6493, BW=101MiB/s (106MB/s)(182MiB/1797msec); 0 zone resets 00:31:22.520 slat (usec): min=29, max=306, avg=31.51, stdev= 6.67 00:31:22.520 clat (usec): min=2973, max=15833, avg=8560.42, stdev=1513.92 00:31:22.520 lat (usec): min=3003, max=15867, avg=8591.93, stdev=1514.88 00:31:22.520 clat percentiles (usec): 00:31:22.520 | 1.00th=[ 5538], 5.00th=[ 6390], 10.00th=[ 6783], 
20.00th=[ 7308], 00:31:22.520 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:31:22.520 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11338], 00:31:22.520 | 99.00th=[12518], 99.50th=[12911], 99.90th=[14484], 99.95th=[15139], 00:31:22.520 | 99.99th=[15533] 00:31:22.520 bw ( KiB/s): min=87584, max=100800, per=89.52%, avg=93008.00, stdev=5630.33, samples=4 00:31:22.520 iops : min= 5474, max= 6300, avg=5813.00, stdev=351.90, samples=4 00:31:22.520 lat (msec) : 2=0.01%, 4=1.76%, 10=90.93%, 20=7.31% 00:31:22.520 cpu : usr=84.49%, sys=13.17%, ctx=178, majf=0, minf=3 00:31:22.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:31:22.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:22.520 issued rwts: total=22112,11669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:22.520 00:31:22.520 Run status group 0 (all jobs): 00:31:22.520 READ: bw=172MiB/s (181MB/s), 172MiB/s-172MiB/s (181MB/s-181MB/s), io=346MiB (362MB), run=2005-2005msec 00:31:22.520 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=182MiB (191MB), run=1797-1797msec 00:31:22.521 22:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:22.521 22:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:22.521 22:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:22.521 22:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:22.521 22:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:22.521 22:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 
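[editor's note] fio's `bw` and `iops` summary lines are internally consistent: bandwidth in KiB/s equals IOPS times the block size in KiB. For the first job above (avg 11852.5 IOPS at bs=4096, i.e. 4 KiB), a quick integer-arithmetic check (scaled by 10 to avoid the fractional IOPS):

```shell
# avg IOPS 11852.5 scaled to 118525; bs = 4 KiB; divide the scale back out
echo $((118525 * 4 / 10))   # prints 47410, matching fio's avg bw of 47410 KiB/s
```

The same relation holds for the 16 KiB mock_sgl job (5568.5 IOPS x 16 KiB ~= 89096 KiB/s).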
00:31:22.521 22:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:22.521 22:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:22.521 22:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:22.779 22:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:22.779 22:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:31:22.779 22:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:31:26.069 Nvme0n1 00:31:26.069 22:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:28.602 22:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=0274ee69-94fb-4c70-abaa-f97633c71653 00:31:28.602 22:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 0274ee69-94fb-4c70-abaa-f97633c71653 00:31:28.602 22:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=0274ee69-94fb-4c70-abaa-f97633c71653 00:31:28.602 22:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:28.602 22:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:28.602 22:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:28.602 22:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores 00:31:28.861 22:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:28.861 { 00:31:28.861 "uuid": "0274ee69-94fb-4c70-abaa-f97633c71653", 00:31:28.861 "name": "lvs_0", 00:31:28.861 "base_bdev": "Nvme0n1", 00:31:28.861 "total_data_clusters": 930, 00:31:28.861 "free_clusters": 930, 00:31:28.861 "block_size": 512, 00:31:28.861 "cluster_size": 1073741824 00:31:28.861 } 00:31:28.861 ]' 00:31:28.861 22:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="0274ee69-94fb-4c70-abaa-f97633c71653") .free_clusters' 00:31:28.862 22:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:31:28.862 22:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="0274ee69-94fb-4c70-abaa-f97633c71653") .cluster_size' 00:31:28.862 22:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:31:28.862 22:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:31:28.862 22:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:31:28.862 952320 00:31:28.862 22:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:29.120 b53d479e-269d-42b0-ae96-d287b2dd0cc9 00:31:29.120 22:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:29.379 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:29.637 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print 
$3}' 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:29.896 22:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:30.155 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:30.155 fio-3.35 00:31:30.155 Starting 1 thread 00:31:32.691 00:31:32.691 test: (groupid=0, jobs=1): err= 0: pid=473363: Sat Dec 14 22:40:53 2024 00:31:32.691 read: IOPS=8088, BW=31.6MiB/s (33.1MB/s)(63.4MiB/2006msec) 00:31:32.691 slat (nsec): min=1526, max=98699, avg=1660.33, stdev=1068.32 00:31:32.691 clat (usec): min=655, max=169954, avg=8696.37, stdev=10251.85 00:31:32.691 lat (usec): min=657, max=169971, avg=8698.03, stdev=10251.99 00:31:32.691 clat percentiles 
(msec): 00:31:32.691 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:31:32.691 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:31:32.691 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 00:31:32.691 | 99.00th=[ 10], 99.50th=[ 13], 99.90th=[ 169], 99.95th=[ 169], 00:31:32.691 | 99.99th=[ 171] 00:31:32.691 bw ( KiB/s): min=22952, max=35576, per=99.95%, avg=32338.00, stdev=6258.26, samples=4 00:31:32.691 iops : min= 5738, max= 8894, avg=8084.50, stdev=1564.57, samples=4 00:31:32.691 write: IOPS=8085, BW=31.6MiB/s (33.1MB/s)(63.4MiB/2006msec); 0 zone resets 00:31:32.691 slat (nsec): min=1571, max=76438, avg=1720.26, stdev=686.80 00:31:32.691 clat (usec): min=184, max=168575, avg=7036.02, stdev=9575.54 00:31:32.691 lat (usec): min=185, max=168580, avg=7037.74, stdev=9575.69 00:31:32.691 clat percentiles (msec): 00:31:32.691 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:31:32.691 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:31:32.691 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:31:32.691 | 99.00th=[ 8], 99.50th=[ 11], 99.90th=[ 169], 99.95th=[ 169], 00:31:32.691 | 99.99th=[ 169] 00:31:32.691 bw ( KiB/s): min=23912, max=35200, per=99.86%, avg=32298.00, stdev=5591.42, samples=4 00:31:32.691 iops : min= 5978, max= 8800, avg=8074.50, stdev=1397.85, samples=4 00:31:32.691 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:31:32.691 lat (msec) : 2=0.05%, 4=0.24%, 10=99.12%, 20=0.18%, 250=0.39% 00:31:32.691 cpu : usr=71.07%, sys=28.23%, ctx=141, majf=0, minf=3 00:31:32.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:32.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:32.691 issued rwts: total=16225,16220,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:32.691 00:31:32.691 Run 
status group 0 (all jobs): 00:31:32.691 READ: bw=31.6MiB/s (33.1MB/s), 31.6MiB/s-31.6MiB/s (33.1MB/s-33.1MB/s), io=63.4MiB (66.5MB), run=2006-2006msec 00:31:32.691 WRITE: bw=31.6MiB/s (33.1MB/s), 31.6MiB/s-31.6MiB/s (33.1MB/s-33.1MB/s), io=63.4MiB (66.4MB), run=2006-2006msec 00:31:32.691 22:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:32.691 22:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:34.069 22:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=8321fb7f-53ef-49f3-987f-17b941683f88 00:31:34.069 22:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 8321fb7f-53ef-49f3-987f-17b941683f88 00:31:34.069 22:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=8321fb7f-53ef-49f3-987f-17b941683f88 00:31:34.069 22:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:34.069 22:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:34.069 22:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:34.069 22:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:34.069 22:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:34.069 { 00:31:34.069 "uuid": "0274ee69-94fb-4c70-abaa-f97633c71653", 00:31:34.069 "name": "lvs_0", 00:31:34.069 "base_bdev": "Nvme0n1", 00:31:34.069 "total_data_clusters": 930, 00:31:34.069 "free_clusters": 0, 00:31:34.069 "block_size": 512, 00:31:34.069 "cluster_size": 1073741824 00:31:34.069 }, 
00:31:34.069 { 00:31:34.069 "uuid": "8321fb7f-53ef-49f3-987f-17b941683f88", 00:31:34.069 "name": "lvs_n_0", 00:31:34.069 "base_bdev": "b53d479e-269d-42b0-ae96-d287b2dd0cc9", 00:31:34.069 "total_data_clusters": 237847, 00:31:34.069 "free_clusters": 237847, 00:31:34.069 "block_size": 512, 00:31:34.069 "cluster_size": 4194304 00:31:34.069 } 00:31:34.069 ]' 00:31:34.069 22:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="8321fb7f-53ef-49f3-987f-17b941683f88") .free_clusters' 00:31:34.069 22:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:31:34.069 22:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="8321fb7f-53ef-49f3-987f-17b941683f88") .cluster_size' 00:31:34.069 22:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:34.069 22:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:31:34.069 22:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:31:34.069 951388 00:31:34.069 22:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:34.648 dccf8120-71c7-4ea5-914b-87f9d080e8c9 00:31:34.648 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:34.907 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:34.907 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:35.166 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:35.166 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:35.166 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:35.166 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:35.166 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:35.166 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:35.166 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:35.166 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:35.166 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.166 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:35.166 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:35.166 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:35.166 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 
-- # asan_lib= 00:31:35.166 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:35.166 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.166 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:35.166 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:35.166 22:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:35.166 22:40:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:35.166 22:40:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:35.166 22:40:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:35.166 22:40:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:35.425 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:35.425 fio-3.35 00:31:35.425 Starting 1 thread 00:31:37.961 00:31:37.961 test: (groupid=0, jobs=1): err= 0: pid=474356: Sat Dec 14 22:40:58 2024 00:31:37.961 read: IOPS=7857, BW=30.7MiB/s (32.2MB/s)(61.6MiB/2007msec) 00:31:37.961 slat (nsec): min=1486, max=98289, avg=1714.70, stdev=1367.35 00:31:37.961 clat (usec): min=3060, max=13873, avg=8986.04, stdev=775.16 00:31:37.961 lat (usec): min=3064, max=13875, avg=8987.76, stdev=775.06 00:31:37.961 clat percentiles (usec): 00:31:37.961 | 1.00th=[ 7177], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8356], 00:31:37.961 
| 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9241], 00:31:37.961 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10159], 00:31:37.961 | 99.00th=[10683], 99.50th=[10814], 99.90th=[11731], 99.95th=[12780], 00:31:37.961 | 99.99th=[13042] 00:31:37.961 bw ( KiB/s): min=30224, max=32024, per=99.95%, avg=31416.00, stdev=810.41, samples=4 00:31:37.961 iops : min= 7556, max= 8006, avg=7854.00, stdev=202.60, samples=4 00:31:37.961 write: IOPS=7833, BW=30.6MiB/s (32.1MB/s)(61.4MiB/2007msec); 0 zone resets 00:31:37.961 slat (nsec): min=1528, max=77434, avg=1812.65, stdev=1149.41 00:31:37.961 clat (usec): min=1437, max=12843, avg=7233.93, stdev=656.22 00:31:37.961 lat (usec): min=1441, max=12845, avg=7235.75, stdev=656.16 00:31:37.961 clat percentiles (usec): 00:31:37.961 | 1.00th=[ 5735], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6718], 00:31:37.961 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:31:37.961 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8225], 00:31:37.961 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[11600], 99.95th=[11731], 00:31:37.961 | 99.99th=[12780] 00:31:37.961 bw ( KiB/s): min=31296, max=31360, per=99.94%, avg=31316.00, stdev=30.29, samples=4 00:31:37.961 iops : min= 7824, max= 7840, avg=7829.00, stdev= 7.57, samples=4 00:31:37.961 lat (msec) : 2=0.01%, 4=0.11%, 10=95.60%, 20=4.28% 00:31:37.961 cpu : usr=67.95%, sys=29.36%, ctx=602, majf=0, minf=3 00:31:37.961 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:37.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:37.961 issued rwts: total=15771,15722,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.961 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:37.961 00:31:37.961 Run status group 0 (all jobs): 00:31:37.961 READ: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s 
(32.2MB/s-32.2MB/s), io=61.6MiB (64.6MB), run=2007-2007msec 00:31:37.961 WRITE: bw=30.6MiB/s (32.1MB/s), 30.6MiB/s-30.6MiB/s (32.1MB/s-32.1MB/s), io=61.4MiB (64.4MB), run=2007-2007msec 00:31:37.961 22:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:38.220 22:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:38.220 22:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:42.412 22:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:42.412 22:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:44.946 22:41:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:45.204 22:41:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@124 -- # set +e 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:47.109 rmmod nvme_tcp 00:31:47.109 rmmod nvme_fabrics 00:31:47.109 rmmod nvme_keyring 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 470521 ']' 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 470521 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 470521 ']' 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 470521 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 470521 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 470521' 00:31:47.109 killing process with pid 470521 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 470521 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 470521 
00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.109 22:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:49.646 00:31:49.646 real 0m39.985s 00:31:49.646 user 2m39.445s 00:31:49.646 sys 0m8.831s 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.646 ************************************ 00:31:49.646 END TEST nvmf_fio_host 00:31:49.646 ************************************ 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh 
--transport=tcp 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.646 ************************************ 00:31:49.646 START TEST nvmf_failover 00:31:49.646 ************************************ 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:49.646 * Looking for test storage... 00:31:49.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 
00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 
00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:49.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.646 --rc genhtml_branch_coverage=1 00:31:49.646 --rc genhtml_function_coverage=1 00:31:49.646 --rc genhtml_legend=1 00:31:49.646 --rc geninfo_all_blocks=1 00:31:49.646 --rc geninfo_unexecuted_blocks=1 00:31:49.646 00:31:49.646 ' 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:49.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.646 --rc genhtml_branch_coverage=1 00:31:49.646 --rc genhtml_function_coverage=1 00:31:49.646 --rc genhtml_legend=1 00:31:49.646 --rc geninfo_all_blocks=1 00:31:49.646 --rc geninfo_unexecuted_blocks=1 00:31:49.646 00:31:49.646 ' 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:49.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.646 --rc genhtml_branch_coverage=1 00:31:49.646 --rc genhtml_function_coverage=1 00:31:49.646 --rc genhtml_legend=1 00:31:49.646 --rc geninfo_all_blocks=1 00:31:49.646 --rc geninfo_unexecuted_blocks=1 00:31:49.646 00:31:49.646 ' 00:31:49.646 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:49.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.647 --rc genhtml_branch_coverage=1 00:31:49.647 --rc genhtml_function_coverage=1 00:31:49.647 --rc genhtml_legend=1 00:31:49.647 --rc geninfo_all_blocks=1 00:31:49.647 --rc geninfo_unexecuted_blocks=1 00:31:49.647 00:31:49.647 ' 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:49.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:49.647 22:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:56.219 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:56.220 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:56.220 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:56.220 22:41:15 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:56.220 Found net devices under 0000:af:00.0: cvl_0_0 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:56.220 Found net devices 
under 0000:af:00.1: cvl_0_1 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:56.220 
22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:56.220 22:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:56.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:56.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:31:56.220 00:31:56.220 --- 10.0.0.2 ping statistics --- 00:31:56.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.220 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:56.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:56.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms
00:31:56.220
00:31:56.220 --- 10.0.0.1 ping statistics ---
00:31:56.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:56.220 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=479455
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 479455
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 479455 ']'
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:56.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:56.220 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:31:56.220 [2024-12-14 22:41:16.258297] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:31:56.220 [2024-12-14 22:41:16.258347] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:56.221 [2024-12-14 22:41:16.334876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:31:56.221 [2024-12-14 22:41:16.357447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:56.221 [2024-12-14 22:41:16.357486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:56.221 [2024-12-14 22:41:16.357493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:56.221 [2024-12-14 22:41:16.357500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:56.221 [2024-12-14 22:41:16.357505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:56.221 [2024-12-14 22:41:16.358807] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:31:56.221 [2024-12-14 22:41:16.358979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:31:56.221 [2024-12-14 22:41:16.358979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:31:56.221 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:56.221 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:31:56.221 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:56.221 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:56.221 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:31:56.221 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:56.221 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:31:56.221 [2024-12-14 22:41:16.650258] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:56.221 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:31:56.221 Malloc0
00:31:56.221 22:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:56.221 22:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:56.479 22:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:56.737 [2024-12-14 22:41:17.411490] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:56.737 22:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:31:56.737 [2024-12-14 22:41:17.616071] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:31:56.996 22:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:31:56.996 [2024-12-14 22:41:17.824786] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:31:56.996 22:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=479824
00:31:56.996 22:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:31:56.996 22:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:31:56.996 22:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 479824 /var/tmp/bdevperf.sock
00:31:56.996 22:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 --
# '[' -z 479824 ']'
00:31:56.996 22:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:56.996 22:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:56.996 22:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:56.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:31:56.996 22:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:56.996 22:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:31:57.255 22:41:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:57.255 22:41:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:31:57.255 22:41:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:31:57.823 NVMe0n1
00:31:57.823 22:41:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:31:58.081
00:31:58.081 22:41:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:31:58.081 22:41:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=479885
00:31:58.081 22:41:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
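At this point the target side is fully built: TCP transport, a Malloc-backed namespace under `nqn.2016-06.io.spdk:cnode1`, and three listeners (4420/4421/4422), with bdevperf attached to the first two paths in `-x failover` mode. A condensed dry-run sketch of that RPC sequence follows; the `rpc` wrapper here only echoes (the real calls need a running `nvmf_tgt`), and the path is shortened to `rpc.py` rather than the full workspace path the log uses:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-setup RPC sequence shown in the log.
# rpc() echoes instead of invoking SPDK's scripts/rpc.py, which would
# need a live nvmf_tgt listening on /var/tmp/spdk.sock.
rpc() { echo "rpc.py $*"; }
subsys=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem "$subsys" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$subsys" Malloc0
for port in 4420 4421 4422; do                # three paths to fail over across
  rpc nvmf_subsystem_add_listener "$subsys" -t tcp -a 10.0.0.2 -s "$port"
done
# bdevperf runs with its own RPC socket; attach two paths in failover mode:
for port in 4420 4421; do
  rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$subsys" -x failover
done
```

Giving the same `-b NVMe0` controller name with `-x failover` is what makes the second listener an alternate path instead of a second controller, so I/O can migrate when a listener disappears.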
00:31:59.017 22:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:59.276 [2024-12-14 22:41:20.072629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afaa0 is same with the state(6) to be set
[previous message repeated several dozen times; duplicate lines omitted]
00:31:59.277 22:41:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:32:02.564 22:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:32:02.822
00:32:02.822 22:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:32:03.081 [2024-12-14 22:41:23.742949] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0fe0 is same with the state(6)
to be set
[previous message repeated several dozen times; duplicate lines omitted]
00:32:03.082 22:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:32:06.373 22:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:06.373 [2024-12-14 22:41:26.954274] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:06.373 22:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:32:07.312 22:41:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:32:07.312 [2024-12-14 22:41:28.168061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be
set 00:32:07.312 [2024-12-14 22:41:28.168136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 
22:41:28.168218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168295] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.312 [2024-12-14 22:41:28.168335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 
is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 
00:32:07.313 [2024-12-14 22:41:28.168537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.313 [2024-12-14 22:41:28.168591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1ea0 is same with the state(6) to be set 00:32:07.572 22:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 479885 00:32:14.146 { 00:32:14.146 "results": [ 00:32:14.146 { 00:32:14.146 "job": "NVMe0n1", 00:32:14.146 "core_mask": "0x1", 00:32:14.146 "workload": "verify", 00:32:14.146 "status": "finished", 00:32:14.146 "verify_range": { 00:32:14.146 "start": 0, 00:32:14.146 "length": 16384 
00:32:14.146 }, 00:32:14.146 "queue_depth": 128, 00:32:14.146 "io_size": 4096, 00:32:14.146 "runtime": 15.007471, 00:32:14.146 "iops": 11138.518941665787, 00:32:14.146 "mibps": 43.50983961588198, 00:32:14.146 "io_failed": 16493, 00:32:14.146 "io_timeout": 0, 00:32:14.146 "avg_latency_us": 10436.77057472981, 00:32:14.146 "min_latency_us": 415.45142857142855, 00:32:14.146 "max_latency_us": 22843.977142857144 00:32:14.146 } 00:32:14.146 ], 00:32:14.146 "core_count": 1 00:32:14.146 } 00:32:14.146 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 479824 00:32:14.146 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 479824 ']' 00:32:14.146 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 479824 00:32:14.146 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:14.146 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:14.146 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 479824 00:32:14.146 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:14.146 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:14.146 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 479824' 00:32:14.146 killing process with pid 479824 00:32:14.146 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 479824 00:32:14.146 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 479824 00:32:14.146 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:14.146 [2024-12-14 22:41:17.900540] Starting SPDK v25.01-pre git 
sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:32:14.146 [2024-12-14 22:41:17.900597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479824 ] 00:32:14.146 [2024-12-14 22:41:17.975394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.146 [2024-12-14 22:41:17.997699] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.146 Running I/O for 15 seconds... 00:32:14.146 11450.00 IOPS, 44.73 MiB/s [2024-12-14T21:41:35.030Z] [2024-12-14 22:41:20.074098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.146 [2024-12-14 22:41:20.074139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.146 [2024-12-14 22:41:20.074157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.146 [2024-12-14 22:41:20.074166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.146 [2024-12-14 22:41:20.074175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.146 [2024-12-14 22:41:20.074184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.146 [2024-12-14 22:41:20.074193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.146 [2024-12-14 22:41:20.074200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.146 [2024-12-14 22:41:20.074209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.146 [2024-12-14 22:41:20.074217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.146 [2024-12-14 22:41:20.074226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.146 [2024-12-14 22:41:20.074233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.146 [2024-12-14 22:41:20.074242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.146 [2024-12-14 22:41:20.074249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.146 [2024-12-14 22:41:20.074258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.146 [2024-12-14 22:41:20.074266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.146 [2024-12-14 22:41:20.074275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.146 [2024-12-14 22:41:20.074284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.146 [2024-12-14 22:41:20.074293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:14.146 [2024-12-14 22:41:20.074300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.146 [2024-12-14 22:41:20.074309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:101680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.146 [2024-12-14 22:41:20.074317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.146 [2024-12-14 22:41:20.074332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.146 [2024-12-14 22:41:20.074339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.146 [2024-12-14 22:41:20.074348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.146 [2024-12-14 22:41:20.074355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.146 [2024-12-14 22:41:20.074364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.146 [2024-12-14 22:41:20.074372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.146 [2024-12-14 22:41:20.074381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.146 [2024-12-14 22:41:20.074388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.146 [2024-12-14 22:41:20.074397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.146 [2024-12-14 22:41:20.074404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.146 [2024-12-14 22:41:20.074412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:14.147 [2024-12-14 22:41:20.074597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:102328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.147 [2024-12-14 22:41:20.074630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.147 [2024-12-14 22:41:20.074647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.147 [2024-12-14 22:41:20.074790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:101888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.147 [2024-12-14 22:41:20.074797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated command/completion pairs elided: queued READ commands (qid:1, lba 101896-102320, various cids) and WRITE commands (qid:1, lba 102344-102608, various cids) each printed by nvme_io_qpair_print_command and completed ABORTED - SQ DELETION (00/08) between 22:41:20.074806 and 22:41:20.076148 ...]
00:32:14.149 [2024-12-14 22:41:20.076167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.149 [2024-12-14 22:41:20.076173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.149 [2024-12-14 22:41:20.076178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102616 len:8 PRP1 0x0 PRP2 0x0 00:32:14.149 [2024-12-14 22:41:20.076187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:14.149 [2024-12-14 22:41:20.076230] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... repeated admin command/completion pairs elided: four ASYNC EVENT REQUEST (0c) commands (qid:0, cid:0-3) each completed ABORTED - SQ DELETION (00/08) between 22:41:20.076253 and 22:41:20.076306 ...]
00:32:14.150 [2024-12-14 22:41:20.076319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:32:14.150 [2024-12-14 22:41:20.076347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193b3a0 (9): Bad file descriptor 00:32:14.150 [2024-12-14 22:41:20.079132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:14.150 [2024-12-14 22:41:20.115514] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:32:14.150 11193.00 IOPS, 43.72 MiB/s [2024-12-14T21:41:35.034Z] 11283.33 IOPS, 44.08 MiB/s [2024-12-14T21:41:35.034Z] 11284.75 IOPS, 44.08 MiB/s [2024-12-14T21:41:35.034Z] [2024-12-14 22:41:23.744912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:54048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.150 [2024-12-14 22:41:23.744946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated command/completion pairs elided: queued READ commands (qid:1, lba 54056-54200, various cids) each printed by nvme_io_qpair_print_command and completed ABORTED - SQ DELETION (00/08) between 22:41:23.744963 and 22:41:23.745246 ...]
00:32:14.150 [2024-12-14 22:41:23.745253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:54208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.150 [2024-12-14 22:41:23.745261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.150 [2024-12-14 22:41:23.745268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:54216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.150 [2024-12-14 22:41:23.745275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.150 [2024-12-14 22:41:23.745283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:54224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.150 [2024-12-14 22:41:23.745290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.150 [2024-12-14 22:41:23.745299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:54232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.150 [2024-12-14 22:41:23.745305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.150 [2024-12-14 22:41:23.745313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:54240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.150 [2024-12-14 22:41:23.745320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.150 [2024-12-14 22:41:23.745328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.150 [2024-12-14 22:41:23.745336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.150 [2024-12-14 22:41:23.745344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.150 
[2024-12-14 22:41:23.745352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.150 [2024-12-14 22:41:23.745360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.150 [2024-12-14 22:41:23.745366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.151 [2024-12-14 22:41:23.745380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.151 [2024-12-14 22:41:23.745395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.151 [2024-12-14 22:41:23.745410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.151 [2024-12-14 22:41:23.745424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745432] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.151 [2024-12-14 22:41:23.745442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:54248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:54256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:54264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:54272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:54280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:54304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:54320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:54328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 
22:41:23.745610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:54344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:54360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:54368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745692] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:125 nsid:1 lba:54376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:54384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:54392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:54400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:54424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:54432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:54440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:54456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:54464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 
22:41:23.745863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:54472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:54488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.151 [2024-12-14 22:41:23.745911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.151 [2024-12-14 22:41:23.745929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.151 [2024-12-14 22:41:23.745944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:95 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.151 [2024-12-14 22:41:23.745958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.151 [2024-12-14 22:41:23.745966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.745972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.745981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.745987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.745995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:14.152 [2024-12-14 22:41:23.746038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:54688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54712 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 
22:41:23.746287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.152 [2024-12-14 22:41:23.746363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.152 [2024-12-14 22:41:23.746369] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:14.152 [2024-12-14 22:41:23.746376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:14.152 [2024-12-14 22:41:23.746383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / ABORTED - SQ DELETION completion pairs repeated for lba:54816 through lba:54936 ...]
00:32:14.153 [2024-12-14 22:41:23.746636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:14.153 [2024-12-14 22:41:23.746645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54944 len:8 PRP1 0x0 PRP2 0x0
00:32:14.153 [2024-12-14 22:41:23.746652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:14.153 [2024-12-14 22:41:23.746661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... identical "aborting queued i/o" / manual-completion / ABORTED - SQ DELETION sequences repeated for lba:54952 through lba:55064 ...]
00:32:14.153 [2024-12-14 22:41:23.759509] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:32:14.153 [2024-12-14 22:41:23.759536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:14.153 [2024-12-14 22:41:23.759549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs repeated for cid:2, cid:1, and cid:0 ...]
00:32:14.153 [2024-12-14 22:41:23.759619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:32:14.153 [2024-12-14 22:41:23.759656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193b3a0 (9): Bad file descriptor
00:32:14.153 [2024-12-14 22:41:23.763413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:32:14.153 [2024-12-14 22:41:23.913852] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:32:14.153 10912.80 IOPS, 42.63 MiB/s [2024-12-14T21:41:35.037Z] 11013.17 IOPS, 43.02 MiB/s [2024-12-14T21:41:35.037Z] 11082.71 IOPS, 43.29 MiB/s [2024-12-14T21:41:35.038Z] 11139.12 IOPS, 43.51 MiB/s [2024-12-14T21:41:35.038Z] 11176.22 IOPS, 43.66 MiB/s [2024-12-14T21:41:35.038Z]
00:32:14.154 [2024-12-14 22:41:28.169979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:14.154 [2024-12-14 22:41:28.170014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION completion pairs repeated for lba:108752 through lba:108984 ...]
00:32:14.154 [2024-12-14 22:41:28.170495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:14.154 [2024-12-14 22:41:28.170502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / ABORTED - SQ DELETION completion pairs repeated for lba:109152 through lba:109424 ...]
00:32:14.155 [2024-12-14 22:41:28.171047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1
lba:109432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.155 [2024-12-14 22:41:28.171054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.155 [2024-12-14 22:41:28.171062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.155 [2024-12-14 22:41:28.171071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.155 [2024-12-14 22:41:28.171080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.155 [2024-12-14 22:41:28.171086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.155 [2024-12-14 22:41:28.171094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:109456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.155 [2024-12-14 22:41:28.171101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.155 [2024-12-14 22:41:28.171111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.155 [2024-12-14 22:41:28.171117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.155 [2024-12-14 22:41:28.171126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.155 [2024-12-14 22:41:28.171132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.155 
[2024-12-14 22:41:28.171140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.155 [2024-12-14 22:41:28.171147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.155 [2024-12-14 22:41:28.171156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.155 [2024-12-14 22:41:28.171162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:109512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.156 [2024-12-14 22:41:28.171221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.156 [2024-12-14 22:41:28.171235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.156 [2024-12-14 22:41:28.171250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:109584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 
[2024-12-14 22:41:28.171397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:109600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:109672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:109680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 
[2024-12-14 22:41:28.171653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:109728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.156 [2024-12-14 22:41:28.171720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.156 [2024-12-14 22:41:28.171735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.156 [2024-12-14 22:41:28.171755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.156 [2024-12-14 22:41:28.171762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109024 len:8 PRP1 0x0 PRP2 0x0 00:32:14.156 [2024-12-14 22:41:28.171770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 22:41:28.171780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.157 [2024-12-14 22:41:28.171786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.157 [2024-12-14 22:41:28.171791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109032 len:8 PRP1 0x0 PRP2 0x0 00:32:14.157 [2024-12-14 22:41:28.171797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 22:41:28.171806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.157 [2024-12-14 22:41:28.171811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.157 [2024-12-14 22:41:28.171816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109040 len:8 PRP1 0x0 PRP2 0x0 00:32:14.157 [2024-12-14 22:41:28.171824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 22:41:28.171830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.157 [2024-12-14 22:41:28.171835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:32:14.157 [2024-12-14 22:41:28.171841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109048 len:8 PRP1 0x0 PRP2 0x0 00:32:14.157 [2024-12-14 22:41:28.171847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 22:41:28.171854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.157 [2024-12-14 22:41:28.171859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.157 [2024-12-14 22:41:28.171865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109056 len:8 PRP1 0x0 PRP2 0x0 00:32:14.157 [2024-12-14 22:41:28.171871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 22:41:28.171879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.157 [2024-12-14 22:41:28.171884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.157 [2024-12-14 22:41:28.171891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109064 len:8 PRP1 0x0 PRP2 0x0 00:32:14.157 [2024-12-14 22:41:28.171897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 22:41:28.171907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.157 [2024-12-14 22:41:28.171913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.157 [2024-12-14 22:41:28.171918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109072 len:8 PRP1 0x0 PRP2 0x0 00:32:14.157 [2024-12-14 22:41:28.171925] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 22:41:28.171931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.157 [2024-12-14 22:41:28.171937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.157 [2024-12-14 22:41:28.171942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109080 len:8 PRP1 0x0 PRP2 0x0 00:32:14.157 [2024-12-14 22:41:28.171948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 22:41:28.171956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.157 [2024-12-14 22:41:28.171960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.157 [2024-12-14 22:41:28.171966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109088 len:8 PRP1 0x0 PRP2 0x0 00:32:14.157 [2024-12-14 22:41:28.171973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 22:41:28.171980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.157 [2024-12-14 22:41:28.171985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.157 [2024-12-14 22:41:28.171991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109096 len:8 PRP1 0x0 PRP2 0x0 00:32:14.157 [2024-12-14 22:41:28.171998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 22:41:28.172006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.157 [2024-12-14 22:41:28.172011] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.157 [2024-12-14 22:41:28.172016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109104 len:8 PRP1 0x0 PRP2 0x0 00:32:14.157 [2024-12-14 22:41:28.172022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 22:41:28.172029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.157 [2024-12-14 22:41:28.172034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.157 [2024-12-14 22:41:28.172040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109112 len:8 PRP1 0x0 PRP2 0x0 00:32:14.157 [2024-12-14 22:41:28.172046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 22:41:28.172053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.157 [2024-12-14 22:41:28.172058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.157 [2024-12-14 22:41:28.172063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109120 len:8 PRP1 0x0 PRP2 0x0 00:32:14.157 [2024-12-14 22:41:28.172069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 22:41:28.172076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.157 [2024-12-14 22:41:28.172082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.157 [2024-12-14 22:41:28.182485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109128 len:8 PRP1 0x0 PRP2 0x0 
00:32:14.157 [2024-12-14 22:41:28.182497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 22:41:28.182506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.157 [2024-12-14 22:41:28.182511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.157 [2024-12-14 22:41:28.182517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109136 len:8 PRP1 0x0 PRP2 0x0 00:32:14.157 [2024-12-14 22:41:28.182524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 22:41:28.182566] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:14.157 [2024-12-14 22:41:28.182589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.157 [2024-12-14 22:41:28.182597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 22:41:28.182605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.157 [2024-12-14 22:41:28.182614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 22:41:28.182622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.157 [2024-12-14 22:41:28.182629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 
22:41:28.182637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.157 [2024-12-14 22:41:28.182644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.157 [2024-12-14 22:41:28.182652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:32:14.157 [2024-12-14 22:41:28.182673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193b3a0 (9): Bad file descriptor 00:32:14.157 [2024-12-14 22:41:28.186387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:32:14.157 [2024-12-14 22:41:28.337411] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:32:14.157 10999.30 IOPS, 42.97 MiB/s [2024-12-14T21:41:35.041Z] 11028.64 IOPS, 43.08 MiB/s [2024-12-14T21:41:35.041Z] 11064.83 IOPS, 43.22 MiB/s [2024-12-14T21:41:35.041Z] 11079.15 IOPS, 43.28 MiB/s [2024-12-14T21:41:35.041Z] 11104.43 IOPS, 43.38 MiB/s [2024-12-14T21:41:35.041Z] 11139.67 IOPS, 43.51 MiB/s 00:32:14.157 Latency(us) 00:32:14.157 [2024-12-14T21:41:35.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.157 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:14.157 Verification LBA range: start 0x0 length 0x4000 00:32:14.157 NVMe0n1 : 15.01 11138.52 43.51 1098.99 0.00 10436.77 415.45 22843.98 00:32:14.157 [2024-12-14T21:41:35.041Z] =================================================================================================================== 00:32:14.157 [2024-12-14T21:41:35.041Z] Total : 11138.52 43.51 1098.99 0.00 10436.77 415.45 22843.98 00:32:14.157 Received shutdown signal, test time was about 15.000000 seconds 00:32:14.157 00:32:14.157 Latency(us) 00:32:14.157 
[2024-12-14T21:41:35.041Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:32:14.157 [2024-12-14T21:41:35.041Z] ===================================================================================================================
00:32:14.157 [2024-12-14T21:41:35.041Z] Total :              0.00  0.00  0.00  0.00  0.00  0.00  0.00
00:32:14.158 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:32:14.158 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:32:14.158 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:32:14.158 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=482331
00:32:14.158 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:32:14.158 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 482331 /var/tmp/bdevperf.sock
00:32:14.158 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 482331 ']'
00:32:14.158 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:14.158 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:14.158 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:32:14.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:32:14.158 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:14.158 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:32:14.158 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:14.158 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:32:14.158 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:32:14.158 [2024-12-14 22:41:34.655099] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:32:14.158 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:32:14.158 [2024-12-14 22:41:34.855676] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:32:14.158 22:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:32:14.419 NVMe0n1
00:32:14.419 22:41:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:32:14.676
00:32:14.677 22:41:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:32:15.244
00:32:15.244 22:41:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:15.244 22:41:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:32:15.244 22:41:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:15.503 22:41:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:32:18.811 22:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:18.811 22:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:32:18.811 22:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=483228
00:32:18.811 22:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:32:18.811 22:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 483228
00:32:19.780 {
00:32:19.780   "results": [
00:32:19.780     {
00:32:19.780       "job": "NVMe0n1",
00:32:19.780       "core_mask": "0x1",
00:32:19.780       "workload": "verify",
00:32:19.780       "status": "finished",
00:32:19.780       "verify_range": {
00:32:19.780         "start": 0,
00:32:19.780         "length": 16384
00:32:19.780       },
00:32:19.780       "queue_depth": 128,
00:32:19.780       "io_size": 4096,
00:32:19.780       "runtime": 1.009064,
00:32:19.780       "iops": 11677.158237733187,
00:32:19.780       "mibps": 45.61389936614526,
00:32:19.780       "io_failed": 0,
00:32:19.780       "io_timeout": 0,
00:32:19.780       "avg_latency_us":
10920.465738937857, 00:32:19.780 "min_latency_us": 2402.9866666666667, 00:32:19.780 "max_latency_us": 10360.929523809524 00:32:19.780 } 00:32:19.780 ], 00:32:19.780 "core_count": 1 00:32:19.780 } 00:32:19.780 22:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:19.780 [2024-12-14 22:41:34.290813] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:32:19.780 [2024-12-14 22:41:34.290865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482331 ] 00:32:19.780 [2024-12-14 22:41:34.362260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.780 [2024-12-14 22:41:34.381792] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.780 [2024-12-14 22:41:36.248907] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:19.780 [2024-12-14 22:41:36.248950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.780 [2024-12-14 22:41:36.248961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.780 [2024-12-14 22:41:36.248970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.780 [2024-12-14 22:41:36.248977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.780 [2024-12-14 22:41:36.248984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:32:19.780 [2024-12-14 22:41:36.248990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.780 [2024-12-14 22:41:36.248998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.780 [2024-12-14 22:41:36.249004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.780 [2024-12-14 22:41:36.249012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:32:19.780 [2024-12-14 22:41:36.249036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:19.780 [2024-12-14 22:41:36.249049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22443a0 (9): Bad file descriptor 00:32:19.780 [2024-12-14 22:41:36.260559] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:19.780 Running I/O for 1 seconds... 
00:32:19.780 11655.00 IOPS, 45.53 MiB/s 00:32:19.780 Latency(us) 00:32:19.780 [2024-12-14T21:41:40.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.780 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:19.780 Verification LBA range: start 0x0 length 0x4000 00:32:19.780 NVMe0n1 : 1.01 11677.16 45.61 0.00 0.00 10920.47 2402.99 10360.93 00:32:19.780 [2024-12-14T21:41:40.664Z] =================================================================================================================== 00:32:19.780 [2024-12-14T21:41:40.664Z] Total : 11677.16 45.61 0.00 0.00 10920.47 2402.99 10360.93 00:32:19.780 22:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:19.780 22:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:20.048 22:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:20.316 22:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:20.316 22:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:20.316 22:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:20.588 22:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:24.017 22:41:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:24.017 22:41:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:24.017 22:41:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 482331 00:32:24.017 22:41:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 482331 ']' 00:32:24.017 22:41:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 482331 00:32:24.017 22:41:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:24.017 22:41:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:24.017 22:41:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482331 00:32:24.017 22:41:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:24.017 22:41:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:24.017 22:41:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482331' 00:32:24.017 killing process with pid 482331 00:32:24.017 22:41:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 482331 00:32:24.017 22:41:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 482331 00:32:24.017 22:41:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:24.017 22:41:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:24.360 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:24.360 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:24.360 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:24.360 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:24.360 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:24.360 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:24.360 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:24.360 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:24.360 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:24.360 rmmod nvme_tcp 00:32:24.360 rmmod nvme_fabrics 00:32:24.360 rmmod nvme_keyring 00:32:24.360 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:24.361 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:24.361 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:24.361 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 479455 ']' 00:32:24.361 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 479455 00:32:24.361 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 479455 ']' 00:32:24.361 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 479455 00:32:24.361 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:24.361 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:24.361 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 479455 00:32:24.361 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:32:24.361 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:24.361 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 479455' 00:32:24.361 killing process with pid 479455 00:32:24.361 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 479455 00:32:24.361 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 479455 00:32:24.624 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:24.624 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:24.624 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:24.624 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:24.624 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:24.624 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:24.624 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:24.624 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:24.624 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:24.624 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.624 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:24.624 22:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.527 22:41:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:26.527 00:32:26.527 real 0m37.280s 00:32:26.527 user 1m58.379s 00:32:26.527 sys 
0m7.665s 00:32:26.527 22:41:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:26.527 22:41:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:26.527 ************************************ 00:32:26.527 END TEST nvmf_failover 00:32:26.527 ************************************ 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.787 ************************************ 00:32:26.787 START TEST nvmf_host_discovery 00:32:26.787 ************************************ 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:26.787 * Looking for test storage... 
00:32:26.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:26.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.787 --rc genhtml_branch_coverage=1 00:32:26.787 --rc genhtml_function_coverage=1 00:32:26.787 --rc 
genhtml_legend=1 00:32:26.787 --rc geninfo_all_blocks=1 00:32:26.787 --rc geninfo_unexecuted_blocks=1 00:32:26.787 00:32:26.787 ' 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:26.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.787 --rc genhtml_branch_coverage=1 00:32:26.787 --rc genhtml_function_coverage=1 00:32:26.787 --rc genhtml_legend=1 00:32:26.787 --rc geninfo_all_blocks=1 00:32:26.787 --rc geninfo_unexecuted_blocks=1 00:32:26.787 00:32:26.787 ' 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:26.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.787 --rc genhtml_branch_coverage=1 00:32:26.787 --rc genhtml_function_coverage=1 00:32:26.787 --rc genhtml_legend=1 00:32:26.787 --rc geninfo_all_blocks=1 00:32:26.787 --rc geninfo_unexecuted_blocks=1 00:32:26.787 00:32:26.787 ' 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:26.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.787 --rc genhtml_branch_coverage=1 00:32:26.787 --rc genhtml_function_coverage=1 00:32:26.787 --rc genhtml_legend=1 00:32:26.787 --rc geninfo_all_blocks=1 00:32:26.787 --rc geninfo_unexecuted_blocks=1 00:32:26.787 00:32:26.787 ' 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.787 22:41:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:26.787 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.788 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.788 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:26.788 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.788 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.788 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:26.788 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.047 22:41:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.047 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.047 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.047 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.047 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.047 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.048 22:41:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:27.048 22:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:33.617 
22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:33.617 22:41:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:33.617 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:33.617 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:33.617 Found net devices under 0000:af:00.0: cvl_0_0 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:33.617 Found net devices under 0000:af:00.1: cvl_0_1 00:32:33.617 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:33.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:33.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:32:33.618 00:32:33.618 --- 10.0.0.2 ping statistics --- 00:32:33.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.618 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:33.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:33.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:32:33.618 00:32:33.618 --- 10.0.0.1 ping statistics --- 00:32:33.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.618 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:33.618 
22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=487603 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 487603 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 487603 ']' 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:33.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.618 [2024-12-14 22:41:53.573058] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:32:33.618 [2024-12-14 22:41:53.573100] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:33.618 [2024-12-14 22:41:53.652342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.618 [2024-12-14 22:41:53.673373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:33.618 [2024-12-14 22:41:53.673408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:33.618 [2024-12-14 22:41:53.673415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:33.618 [2024-12-14 22:41:53.673421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:33.618 [2024-12-14 22:41:53.673426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:33.618 [2024-12-14 22:41:53.673897] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.618 [2024-12-14 22:41:53.803457] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.618 [2024-12-14 22:41:53.815616] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:33.618 22:41:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.618 null0 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.618 null1 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=487630 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 487630 /tmp/host.sock 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 487630 ']' 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:33.618 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:33.618 22:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.618 [2024-12-14 22:41:53.893067] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:32:33.619 [2024-12-14 22:41:53.893105] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487630 ] 00:32:33.619 [2024-12-14 22:41:53.965713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.619 [2024-12-14 22:41:53.988103] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:33.619 22:41:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:33.619 22:41:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:33.619 22:41:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:33.619 22:41:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.619 [2024-12-14 22:41:54.417889] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- 
# jq -r '.[].name' 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:33.619 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:32:33.878 22:41:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:34.445 [2024-12-14 22:41:55.150058] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:34.445 [2024-12-14 22:41:55.150080] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:34.445 [2024-12-14 22:41:55.150093] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:34.445 [2024-12-14 22:41:55.236343] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:34.703 [2024-12-14 22:41:55.412299] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:34.703 [2024-12-14 22:41:55.413035] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x24e0f60:1 started. 
00:32:34.703 [2024-12-14 22:41:55.414357] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:34.703 [2024-12-14 22:41:55.414373] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:34.703 [2024-12-14 22:41:55.418822] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24e0f60 was disconnected and freed. delete nvme_qpair. 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # 
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" 
== "$NVMF_PORT" ]]' 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:34.962 22:41:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:34.962 [2024-12-14 22:41:55.824662] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x24cb3c0:1 started. 
00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.962 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:34.962 [2024-12-14 22:41:55.829688] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24cb3c0 was disconnected and freed. delete nvme_qpair. 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.222 [2024-12-14 22:41:55.925534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4421 *** 00:32:35.222 [2024-12-14 22:41:55.926416] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:35.222 [2024-12-14 22:41:55.926435] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:35.222 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:35.223 22:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:35.223 22:41:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.223 [2024-12-14 22:41:56.012668] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:35.223 
22:41:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:35.223 22:41:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:35.223 22:41:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:35.223 22:41:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:35.223 22:41:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:35.223 22:41:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:35.223 22:41:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:35.223 22:41:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:35.223 22:41:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:35.223 22:41:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:35.223 22:41:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.223 22:41:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:35.223 22:41:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.223 22:41:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:35.223 22:41:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.223 [2024-12-14 22:41:56.075218] 
bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:32:35.223 [2024-12-14 22:41:56.075249] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:35.223 [2024-12-14 22:41:56.075257] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:35.223 [2024-12-14 22:41:56.075261] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:35.223 22:41:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:35.223 22:41:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:36.601 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:36.601 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:36.601 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:36.601 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:36.601 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:36.601 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.601 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:36.601 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.601 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:36.601 22:41:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.601 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:36.601 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:36.601 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:36.601 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:36.601 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.602 [2024-12-14 22:41:57.185325] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:36.602 [2024-12-14 22:41:57.185346] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:36.602 [2024-12-14 22:41:57.186745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.602 [2024-12-14 22:41:57.186761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.602 [2024-12-14 22:41:57.186769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.602 [2024-12-14 22:41:57.186776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.602 [2024-12-14 22:41:57.186783] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.602 [2024-12-14 22:41:57.186790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.602 [2024-12-14 22:41:57.186797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.602 [2024-12-14 22:41:57.186804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.602 [2024-12-14 22:41:57.186811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b2ef0 is same with the state(6) to be set 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:36.602 
22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:36.602 [2024-12-14 22:41:57.196759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b2ef0 (9): Bad file descriptor 00:32:36.602 [2024-12-14 22:41:57.206793] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:36.602 [2024-12-14 22:41:57.206805] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:36.602 [2024-12-14 22:41:57.206811] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:36.602 [2024-12-14 22:41:57.206816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:36.602 [2024-12-14 22:41:57.206833] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:36.602 [2024-12-14 22:41:57.207093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.602 [2024-12-14 22:41:57.207110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b2ef0 with addr=10.0.0.2, port=4420 00:32:36.602 [2024-12-14 22:41:57.207118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b2ef0 is same with the state(6) to be set 00:32:36.602 [2024-12-14 22:41:57.207130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b2ef0 (9): Bad file descriptor 00:32:36.602 [2024-12-14 22:41:57.207140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:36.602 [2024-12-14 22:41:57.207147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:36.602 [2024-12-14 22:41:57.207155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:36.602 [2024-12-14 22:41:57.207161] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:36.602 [2024-12-14 22:41:57.207165] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:36.602 [2024-12-14 22:41:57.207169] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:36.602 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.602 [2024-12-14 22:41:57.216864] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:36.602 [2024-12-14 22:41:57.216874] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:32:36.602 [2024-12-14 22:41:57.216878] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:32:36.602 [2024-12-14 22:41:57.216886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:36.602 [2024-12-14 22:41:57.216899] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:32:36.602 [2024-12-14 22:41:57.217076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.602 [2024-12-14 22:41:57.217088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b2ef0 with addr=10.0.0.2, port=4420
00:32:36.602 [2024-12-14 22:41:57.217098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b2ef0 is same with the state(6) to be set
00:32:36.602 [2024-12-14 22:41:57.217109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b2ef0 (9): Bad file descriptor
00:32:36.602 [2024-12-14 22:41:57.217119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:32:36.602 [2024-12-14 22:41:57.217125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:32:36.602 [2024-12-14 22:41:57.217133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:32:36.602 [2024-12-14 22:41:57.217138] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:32:36.602 [2024-12-14 22:41:57.217143] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:32:36.602 [2024-12-14 22:41:57.217147] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:32:36.602 [2024-12-14 22:41:57.226930] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:32:36.602 [2024-12-14 22:41:57.226941] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:32:36.602 [2024-12-14 22:41:57.226944] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:32:36.602 [2024-12-14 22:41:57.226949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:36.602 [2024-12-14 22:41:57.226962] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:32:36.602 [2024-12-14 22:41:57.227125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.602 [2024-12-14 22:41:57.227137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b2ef0 with addr=10.0.0.2, port=4420
00:32:36.602 [2024-12-14 22:41:57.227144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b2ef0 is same with the state(6) to be set
00:32:36.602 [2024-12-14 22:41:57.227154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b2ef0 (9): Bad file descriptor
00:32:36.602 [2024-12-14 22:41:57.227164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:32:36.602 [2024-12-14 22:41:57.227171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:32:36.602 [2024-12-14 22:41:57.227177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:32:36.602 [2024-12-14 22:41:57.227183] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:32:36.602 [2024-12-14 22:41:57.227187] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:32:36.602 [2024-12-14 22:41:57.227191] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:32:36.602 [2024-12-14 22:41:57.236993] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:32:36.602 [2024-12-14 22:41:57.237005] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:32:36.602 [2024-12-14 22:41:57.237013] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:32:36.602 [2024-12-14 22:41:57.237017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:36.602 [2024-12-14 22:41:57.237031] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:32:36.602 [2024-12-14 22:41:57.237217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.603 [2024-12-14 22:41:57.237229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b2ef0 with addr=10.0.0.2, port=4420
00:32:36.603 [2024-12-14 22:41:57.237237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b2ef0 is same with the state(6) to be set
00:32:36.603 [2024-12-14 22:41:57.237249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b2ef0 (9): Bad file descriptor
00:32:36.603 [2024-12-14 22:41:57.237259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:32:36.603 [2024-12-14 22:41:57.237266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:32:36.603 [2024-12-14 22:41:57.237273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:32:36.603 [2024-12-14 22:41:57.237279] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:32:36.603 [2024-12-14 22:41:57.237283] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:32:36.603 [2024-12-14 22:41:57.237287] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:32:36.603 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:36.603 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:36.603 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:32:36.603 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:32:36.603 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:36.603 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:36.603 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:32:36.603 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:32:36.603 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:36.603 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:36.603 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:36.603 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:36.603 [2024-12-14 22:41:57.247061] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:32:36.603 [2024-12-14 22:41:57.247075] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:32:36.603 [2024-12-14 22:41:57.247082] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:32:36.603 [2024-12-14 22:41:57.247093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:36.603 [2024-12-14 22:41:57.247112] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:32:36.603 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.603 [2024-12-14 22:41:57.247339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.603 [2024-12-14 22:41:57.247357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b2ef0 with addr=10.0.0.2, port=4420
00:32:36.603 [2024-12-14 22:41:57.247366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b2ef0 is same with the state(6) to be set
00:32:36.603 [2024-12-14 22:41:57.247377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b2ef0 (9): Bad file descriptor
00:32:36.603 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:36.603 [2024-12-14 22:41:57.247386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:32:36.603 [2024-12-14 22:41:57.247393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:32:36.603 [2024-12-14 22:41:57.247400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:32:36.603 [2024-12-14 22:41:57.247405] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:32:36.603 [2024-12-14 22:41:57.247411] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:32:36.603 [2024-12-14 22:41:57.247415] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:32:36.603 [2024-12-14 22:41:57.257142] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:32:36.603 [2024-12-14 22:41:57.257157] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:32:36.603 [2024-12-14 22:41:57.257161] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:32:36.603 [2024-12-14 22:41:57.257165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:36.603 [2024-12-14 22:41:57.257181] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:32:36.603 [2024-12-14 22:41:57.257363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.603 [2024-12-14 22:41:57.257378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b2ef0 with addr=10.0.0.2, port=4420
00:32:36.603 [2024-12-14 22:41:57.257386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b2ef0 is same with the state(6) to be set
00:32:36.603 [2024-12-14 22:41:57.257398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b2ef0 (9): Bad file descriptor
00:32:36.603 [2024-12-14 22:41:57.257408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:32:36.603 [2024-12-14 22:41:57.257415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:32:36.603 [2024-12-14 22:41:57.257422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:32:36.603 [2024-12-14 22:41:57.257428] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:32:36.603 [2024-12-14 22:41:57.257432] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:32:36.603 [2024-12-14 22:41:57.257436] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:32:36.603 [2024-12-14 22:41:57.267211] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:32:36.603 [2024-12-14 22:41:57.267221] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:32:36.603 [2024-12-14 22:41:57.267225] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:32:36.603 [2024-12-14 22:41:57.267229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:36.603 [2024-12-14 22:41:57.267246] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:32:36.603 [2024-12-14 22:41:57.267440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.603 [2024-12-14 22:41:57.267454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b2ef0 with addr=10.0.0.2, port=4420
00:32:36.603 [2024-12-14 22:41:57.267461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b2ef0 is same with the state(6) to be set
00:32:36.603 [2024-12-14 22:41:57.267472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b2ef0 (9): Bad file descriptor
00:32:36.603 [2024-12-14 22:41:57.267482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:32:36.603 [2024-12-14 22:41:57.267489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:32:36.603 [2024-12-14 22:41:57.267496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:32:36.603 [2024-12-14 22:41:57.267502] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:32:36.603 [2024-12-14 22:41:57.267506] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:32:36.603 [2024-12-14 22:41:57.267510] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:32:36.603 [2024-12-14 22:41:57.277276] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:32:36.603 [2024-12-14 22:41:57.277289] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:32:36.603 [2024-12-14 22:41:57.277293] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:32:36.603 [2024-12-14 22:41:57.277298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:36.603 [2024-12-14 22:41:57.277312] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:32:36.603 [2024-12-14 22:41:57.277544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.603 [2024-12-14 22:41:57.277558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b2ef0 with addr=10.0.0.2, port=4420
00:32:36.603 [2024-12-14 22:41:57.277566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b2ef0 is same with the state(6) to be set
00:32:36.603 [2024-12-14 22:41:57.277577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b2ef0 (9): Bad file descriptor
00:32:36.603 [2024-12-14 22:41:57.277594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:32:36.603 [2024-12-14 22:41:57.277601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:32:36.603 [2024-12-14 22:41:57.277609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:32:36.603 [2024-12-14 22:41:57.277615] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:32:36.603 [2024-12-14 22:41:57.277620] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:32:36.603 [2024-12-14 22:41:57.277624] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:32:36.603 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:36.603 [2024-12-14 22:41:57.287343] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:32:36.603 [2024-12-14 22:41:57.287353] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:32:36.603 [2024-12-14 22:41:57.287357] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:32:36.603 [2024-12-14 22:41:57.287366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:36.603 [2024-12-14 22:41:57.287380] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:32:36.603 [2024-12-14 22:41:57.287518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.603 [2024-12-14 22:41:57.287531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b2ef0 with addr=10.0.0.2, port=4420
00:32:36.603 [2024-12-14 22:41:57.287538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b2ef0 is same with the state(6) to be set
00:32:36.603 [2024-12-14 22:41:57.287549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b2ef0 (9): Bad file descriptor
00:32:36.604 [2024-12-14 22:41:57.287558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:32:36.604 [2024-12-14 22:41:57.287565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:32:36.604 [2024-12-14 22:41:57.287571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:32:36.604 [2024-12-14 22:41:57.287577] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:32:36.604 [2024-12-14 22:41:57.287581] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:32:36.604 [2024-12-14 22:41:57.287584] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:32:36.604 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:32:36.604 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:36.604 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:32:36.604 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:32:36.604 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:36.604 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:36.604 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:32:36.604 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:32:36.604 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:32:36.604 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:36.604 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.604 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:32:36.604 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:32:36.604 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:32:36.604 [2024-12-14 22:41:57.297410] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:32:36.604 [2024-12-14 22:41:57.297421] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:32:36.604 [2024-12-14 22:41:57.297425] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:32:36.604 [2024-12-14 22:41:57.297429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:36.604 [2024-12-14 22:41:57.297442] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:32:36.604 [2024-12-14 22:41:57.297606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.604 [2024-12-14 22:41:57.297619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b2ef0 with addr=10.0.0.2, port=4420
00:32:36.604 [2024-12-14 22:41:57.297627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b2ef0 is same with the state(6) to be set
00:32:36.604 [2024-12-14 22:41:57.297638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b2ef0 (9): Bad file descriptor
00:32:36.604 [2024-12-14 22:41:57.297653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:32:36.604 [2024-12-14 22:41:57.297660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:32:36.604 [2024-12-14 22:41:57.297667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:32:36.604 [2024-12-14 22:41:57.297673] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:32:36.604 [2024-12-14 22:41:57.297677] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:32:36.604 [2024-12-14 22:41:57.297681] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:32:36.604 [2024-12-14 22:41:57.307473] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:32:36.604 [2024-12-14 22:41:57.307484] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:32:36.604 [2024-12-14 22:41:57.307488] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:32:36.604 [2024-12-14 22:41:57.307492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:36.604 [2024-12-14 22:41:57.307507] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:32:36.604 [2024-12-14 22:41:57.307673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.604 [2024-12-14 22:41:57.307687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b2ef0 with addr=10.0.0.2, port=4420
00:32:36.604 [2024-12-14 22:41:57.307695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b2ef0 is same with the state(6) to be set
00:32:36.604 [2024-12-14 22:41:57.307706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b2ef0 (9): Bad file descriptor
00:32:36.604 [2024-12-14 22:41:57.307715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:32:36.604 [2024-12-14 22:41:57.307722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:32:36.604 [2024-12-14 22:41:57.307728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:32:36.604 [2024-12-14 22:41:57.307734] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:32:36.604 [2024-12-14 22:41:57.307738] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:32:36.604 [2024-12-14 22:41:57.307742] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:32:36.604 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:36.604 [2024-12-14 22:41:57.311273] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:32:36.604 [2024-12-14 22:41:57.311287] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:32:36.604 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]]
00:32:36.604 22:41:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]]
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:37.540 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:37.799 22:41:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:39.177 [2024-12-14 22:41:59.662389] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:32:39.177 [2024-12-14 22:41:59.662406] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:32:39.177 [2024-12-14 22:41:59.662416] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:32:39.177 [2024-12-14 22:41:59.788787] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:39.177 [2024-12-14 22:41:59.854299] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:32:39.177 [2024-12-14 22:41:59.854844] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x24dbb90:1 started. 00:32:39.177 [2024-12-14 22:41:59.856346] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:39.177 [2024-12-14 22:41:59.856369] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.177 [2024-12-14 22:41:59.860232] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x24dbb90 was disconnected and freed. delete nvme_qpair. 
00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.177 request: 00:32:39.177 { 00:32:39.177 "name": "nvme", 00:32:39.177 "trtype": "tcp", 00:32:39.177 "traddr": "10.0.0.2", 00:32:39.177 "adrfam": "ipv4", 00:32:39.177 "trsvcid": "8009", 00:32:39.177 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:39.177 "wait_for_attach": true, 00:32:39.177 "method": "bdev_nvme_start_discovery", 00:32:39.177 "req_id": 1 00:32:39.177 } 00:32:39.177 Got JSON-RPC error response 00:32:39.177 response: 00:32:39.177 { 00:32:39.177 "code": -17, 00:32:39.177 "message": "File exists" 00:32:39.177 } 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 
-- # (( !es == 0 )) 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:39.177 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == 
\n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.178 request: 00:32:39.178 { 00:32:39.178 "name": "nvme_second", 00:32:39.178 "trtype": "tcp", 00:32:39.178 "traddr": "10.0.0.2", 00:32:39.178 "adrfam": "ipv4", 00:32:39.178 "trsvcid": "8009", 00:32:39.178 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:39.178 "wait_for_attach": true, 00:32:39.178 "method": "bdev_nvme_start_discovery", 00:32:39.178 "req_id": 1 00:32:39.178 } 00:32:39.178 Got JSON-RPC error response 00:32:39.178 
response: 00:32:39.178 { 00:32:39.178 "code": -17, 00:32:39.178 "message": "File exists" 00:32:39.178 } 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.178 22:41:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:39.178 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.178 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:39.178 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:39.178 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.178 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 
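An aside for readers of this log: the `request:`/`response:` pairs printed above are the JSON-RPC exchanges between the test's `rpc_cmd` helper and the SPDK target listening on `/tmp/host.sock`. A minimal sketch of how such a request body is assembled — the `build_rpc_request` helper below is hypothetical and for illustration only (SPDK's own `scripts/rpc.py` performs the real framing and Unix-socket transport); the parameter values are copied from the logged request:

```python
import json

def build_rpc_request(method, params, req_id=1):
    """Assemble a JSON-RPC 2.0 request like the ones dumped in the log above.

    The log prints the params, method, and req_id flattened into one block;
    the actual wire format nests the params as shown here.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "id": req_id,
        "params": params,
    })

# The second bdev_nvme_start_discovery call seen in the log: a discovery
# service for this address/port already exists, so the target answers with
# error -17 ("File exists") -- which is what the NOT rpc_cmd check expects.
req = build_rpc_request("bdev_nvme_start_discovery", {
    "name": "nvme_second",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "8009",
    "hostnqn": "nqn.2021-12.io.spdk:test",
    "wait_for_attach": True,
})
print(req)
```

The duplicate-registration rejection is the behavior under test here: the `[[ 1 == 0 ]]` guard and `es=1` lines that follow record the expected non-zero exit from `rpc_cmd`.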
00:32:39.178 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.178 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:39.178 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:39.178 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.436 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.436 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:39.436 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:39.436 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:39.436 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:39.436 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:39.436 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:39.436 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:39.436 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:39.436 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 
00:32:39.436 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.436 22:42:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.372 [2024-12-14 22:42:01.097439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.372 [2024-12-14 22:42:01.097469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9150 with addr=10.0.0.2, port=8010 00:32:40.372 [2024-12-14 22:42:01.097491] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:40.372 [2024-12-14 22:42:01.097498] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:40.372 [2024-12-14 22:42:01.097505] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:41.307 [2024-12-14 22:42:02.099783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.307 [2024-12-14 22:42:02.099811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e8310 with addr=10.0.0.2, port=8010 00:32:41.307 [2024-12-14 22:42:02.099827] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:41.307 [2024-12-14 22:42:02.099833] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:41.307 [2024-12-14 22:42:02.099840] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:42.244 [2024-12-14 22:42:03.102013] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:42.244 request: 00:32:42.244 { 00:32:42.244 "name": "nvme_second", 00:32:42.244 "trtype": "tcp", 00:32:42.244 "traddr": "10.0.0.2", 00:32:42.244 "adrfam": "ipv4", 00:32:42.244 "trsvcid": "8010", 00:32:42.244 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:42.244 "wait_for_attach": false, 00:32:42.244 
"attach_timeout_ms": 3000, 00:32:42.244 "method": "bdev_nvme_start_discovery", 00:32:42.244 "req_id": 1 00:32:42.244 } 00:32:42.244 Got JSON-RPC error response 00:32:42.244 response: 00:32:42.244 { 00:32:42.244 "code": -110, 00:32:42.244 "message": "Connection timed out" 00:32:42.244 } 00:32:42.244 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:42.244 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:42.244 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:42.244 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:42.244 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:42.244 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:42.244 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:42.244 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:42.244 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.244 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:42.244 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.244 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:42.244 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:42.503 22:42:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 487630 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:42.503 rmmod nvme_tcp 00:32:42.503 rmmod nvme_fabrics 00:32:42.503 rmmod nvme_keyring 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 487603 ']' 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 487603 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 487603 ']' 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 487603 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 487603 
00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 487603' 00:32:42.503 killing process with pid 487603 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 487603 00:32:42.503 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 487603 00:32:42.762 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:42.762 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:42.762 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:42.762 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:42.762 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:32:42.762 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:42.762 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:32:42.762 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:42.762 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:42.762 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.762 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:42.762 22:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:32:44.665 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:44.665 00:32:44.665 real 0m18.043s 00:32:44.665 user 0m22.450s 00:32:44.665 sys 0m5.763s 00:32:44.665 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:44.665 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.665 ************************************ 00:32:44.665 END TEST nvmf_host_discovery 00:32:44.665 ************************************ 00:32:44.665 22:42:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:44.665 22:42:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:44.665 22:42:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:44.665 22:42:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.924 ************************************ 00:32:44.924 START TEST nvmf_host_multipath_status 00:32:44.924 ************************************ 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:44.924 * Looking for test storage... 
00:32:44.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:44.924 22:42:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:44.924 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:44.925 22:42:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:44.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.925 --rc genhtml_branch_coverage=1 00:32:44.925 --rc genhtml_function_coverage=1 00:32:44.925 --rc genhtml_legend=1 00:32:44.925 --rc geninfo_all_blocks=1 00:32:44.925 --rc geninfo_unexecuted_blocks=1 00:32:44.925 00:32:44.925 ' 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:44.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.925 --rc genhtml_branch_coverage=1 00:32:44.925 --rc genhtml_function_coverage=1 00:32:44.925 --rc genhtml_legend=1 00:32:44.925 --rc geninfo_all_blocks=1 00:32:44.925 --rc geninfo_unexecuted_blocks=1 00:32:44.925 00:32:44.925 ' 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:44.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.925 --rc genhtml_branch_coverage=1 00:32:44.925 --rc genhtml_function_coverage=1 00:32:44.925 --rc genhtml_legend=1 00:32:44.925 --rc geninfo_all_blocks=1 00:32:44.925 --rc geninfo_unexecuted_blocks=1 00:32:44.925 00:32:44.925 ' 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:44.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.925 --rc genhtml_branch_coverage=1 00:32:44.925 --rc genhtml_function_coverage=1 00:32:44.925 --rc genhtml_legend=1 00:32:44.925 --rc geninfo_all_blocks=1 00:32:44.925 --rc geninfo_unexecuted_blocks=1 00:32:44.925 00:32:44.925 ' 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:44.925 
22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:44.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:44.925 22:42:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:44.925 22:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:51.497 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:51.498 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:51.498 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:51.498 Found net devices under 0000:af:00.0: cvl_0_0 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:51.498 22:42:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:51.498 Found net devices under 0000:af:00.1: cvl_0_1 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:51.498 22:42:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:51.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:51.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:32:51.498 00:32:51.498 --- 10.0.0.2 ping statistics --- 00:32:51.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.498 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:51.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:51.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:32:51.498 00:32:51.498 --- 10.0.0.1 ping statistics --- 00:32:51.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.498 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=493149 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 493149 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 493149 ']' 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:51.498 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:51.498 [2024-12-14 22:42:11.734371] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:32:51.498 [2024-12-14 22:42:11.734411] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:51.498 [2024-12-14 22:42:11.810672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:51.498 [2024-12-14 22:42:11.832528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:51.498 [2024-12-14 22:42:11.832565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:51.498 [2024-12-14 22:42:11.832572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:51.498 [2024-12-14 22:42:11.832578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:51.498 [2024-12-14 22:42:11.832584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:51.498 [2024-12-14 22:42:11.833682] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.498 [2024-12-14 22:42:11.833683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.499 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:51.499 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:51.499 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:51.499 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:51.499 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:51.499 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:51.499 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=493149 00:32:51.499 22:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:51.499 [2024-12-14 22:42:12.137204] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.499 22:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:32:51.499 Malloc0 00:32:51.757 22:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:51.757 22:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:52.015 22:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:52.274 [2024-12-14 22:42:12.931551] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:52.274 22:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:52.274 [2024-12-14 22:42:13.128052] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:52.533 22:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:52.533 22:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=493427 00:32:52.533 22:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:52.533 22:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 493427 /var/tmp/bdevperf.sock 00:32:52.533 22:42:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 493427 ']' 00:32:52.533 22:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:52.533 22:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:52.533 22:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:52.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:52.533 22:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:52.533 22:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:52.533 22:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:52.533 22:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:52.533 22:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:52.792 22:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:53.359 Nvme0n1 00:32:53.359 22:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:53.618 Nvme0n1 00:32:53.618 22:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:53.618 22:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:55.526 22:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:55.526 22:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:55.785 22:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:56.043 22:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:56.980 22:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:56.980 22:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:56.980 22:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.980 22:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:57.239 22:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.239 22:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:57.239 22:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.239 22:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:57.498 22:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:57.498 22:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:57.498 22:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.498 22:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:57.757 22:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.757 22:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:57.757 22:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:57.757 22:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.757 22:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.757 22:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:57.757 22:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.757 22:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:58.016 22:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.016 22:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:58.016 22:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.016 22:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:58.274 22:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.274 22:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:58.274 22:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:58.533 22:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:58.792 22:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:59.728 22:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:59.728 22:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:59.728 22:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.728 22:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:59.987 22:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:59.987 22:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:59.987 22:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.987 22:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:59.987 22:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.987 22:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:59.987 22:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.987 22:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:00.245 22:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.245 22:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:00.245 22:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.245 22:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:00.504 22:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.504 22:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:00.504 22:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.504 22:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:00.763 22:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.763 22:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:00.763 22:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.763 22:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:01.022 22:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.022 22:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:01.022 22:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:01.281 22:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:01.281 22:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:02.658 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:02.658 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:02.658 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.658 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:02.658 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.658 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:02.658 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.658 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:02.658 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:02.658 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:02.658 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.658 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:02.918 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.918 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:02.918 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:02.918 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.177 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.177 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:03.177 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:03.177 22:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.435 22:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.435 22:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:03.435 22:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:03.435 22:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.695 22:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.695 22:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:03.695 22:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:03.954 22:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:03.954 22:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:05.330 22:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:05.331 22:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:05.331 22:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.331 22:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:05.331 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.331 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:05.331 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.331 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:05.590 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:05.590 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:05.590 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.590 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:05.590 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.590 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:05.590 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.590 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:05.849 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.849 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:05.849 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.849 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:06.107 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.107 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:06.107 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:06.107 22:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.366 22:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:06.366 22:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:06.366 22:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:06.625 22:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:06.625 22:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:08.002 22:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:08.002 22:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:08.002 22:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.002 22:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:08.002 22:42:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.002 22:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:08.002 22:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.002 22:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:08.261 22:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.261 22:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:08.261 22:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.261 22:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:08.261 22:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.261 22:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:08.261 22:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.261 22:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:08.520 
22:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.520 22:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:08.520 22:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.520 22:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:08.779 22:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.779 22:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:08.779 22:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.779 22:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:09.037 22:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:09.038 22:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:09.038 22:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:09.038 22:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:09.296 22:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:10.231 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:10.231 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:10.231 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.231 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:10.490 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:10.490 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:10.490 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.490 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:10.748 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.748 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:10.748 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.748 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:11.006 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.006 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:11.006 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.006 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:11.264 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.264 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:11.264 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.264 22:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:11.264 22:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:11.264 22:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:11.264 22:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.264 22:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:11.523 22:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.523 22:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:11.781 22:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:11.781 22:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:12.040 22:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:12.299 22:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:13.235 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:13.235 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:13.235 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:13.235 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:13.494 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.494 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:13.494 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.494 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:13.752 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.752 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:13.752 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.752 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:13.752 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.752 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:13.752 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:13.752 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:14.011 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.011 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:14.011 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.011 22:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:14.269 22:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.269 22:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:14.269 22:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.269 22:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:14.527 22:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.527 22:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:14.527 22:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:14.785 22:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:14.785 22:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:16.160 22:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:16.160 22:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:16.160 22:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.160 22:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:16.160 22:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:16.160 22:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:16.160 22:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.160 22:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:16.419 22:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.419 22:42:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:16.419 22:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:16.419 22:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.678 22:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.678 22:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:16.678 22:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.678 22:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:16.678 22:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.678 22:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:16.678 22:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.678 22:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:16.936 22:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.936 
22:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:16.936 22:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:16.936 22:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.195 22:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.195 22:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:17.195 22:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:17.453 22:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:17.453 22:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:18.830 22:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:18.830 22:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:18.830 22:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.830 22:42:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:18.830 22:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.830 22:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:18.830 22:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.830 22:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:19.089 22:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.089 22:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:19.089 22:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.089 22:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:19.089 22:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.089 22:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:19.089 22:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.089 22:42:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:19.347 22:42:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.347 22:42:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:19.347 22:42:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.347 22:42:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:19.606 22:42:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.606 22:42:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:19.606 22:42:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.606 22:42:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:19.865 22:42:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.865 22:42:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:19.865 22:42:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:20.124 22:42:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:20.382 22:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:21.319 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:21.319 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:21.319 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.319 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:21.577 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.577 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:21.578 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.578 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:21.836 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:21.836 22:42:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:21.836 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.836 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:21.836 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.836 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:21.836 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.836 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:22.095 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.095 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:22.095 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.095 22:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:22.353 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.353 
22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:22.353 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.353 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:22.612 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:22.612 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 493427 00:33:22.612 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 493427 ']' 00:33:22.612 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 493427 00:33:22.612 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:22.612 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:22.612 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 493427 00:33:22.612 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:22.612 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:22.612 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 493427' 00:33:22.612 killing process with pid 493427 00:33:22.612 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 493427 00:33:22.612 22:42:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 493427 00:33:22.612 { 00:33:22.612 "results": [ 00:33:22.612 { 00:33:22.612 "job": "Nvme0n1", 00:33:22.612 "core_mask": "0x4", 00:33:22.612 "workload": "verify", 00:33:22.612 "status": "terminated", 00:33:22.612 "verify_range": { 00:33:22.612 "start": 0, 00:33:22.612 "length": 16384 00:33:22.612 }, 00:33:22.612 "queue_depth": 128, 00:33:22.612 "io_size": 4096, 00:33:22.612 "runtime": 28.956745, 00:33:22.612 "iops": 10632.168774494508, 00:33:22.612 "mibps": 41.53190927536917, 00:33:22.612 "io_failed": 0, 00:33:22.612 "io_timeout": 0, 00:33:22.612 "avg_latency_us": 12018.892877987879, 00:33:22.612 "min_latency_us": 438.85714285714283, 00:33:22.612 "max_latency_us": 3019898.88 00:33:22.612 } 00:33:22.612 ], 00:33:22.612 "core_count": 1 00:33:22.612 } 00:33:22.874 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 493427 00:33:22.874 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:22.874 [2024-12-14 22:42:13.204720] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:33:22.874 [2024-12-14 22:42:13.204772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493427 ] 00:33:22.874 [2024-12-14 22:42:13.280102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.874 [2024-12-14 22:42:13.302409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:33:22.874 Running I/O for 90 seconds... 
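Aside: in the terminated-job summary above, the reported `mibps` is derived from `iops` and `io_size` (mibps = iops * io_size / 2^20). A minimal sketch recomputing it from the logged values, to show the relation (values copied from the JSON above; the awk one-liner is illustrative, not part of the test script):

```shell
# Recompute bdevperf's mibps from its own iops/io_size fields.
iops=10632.168774494508   # from "iops" in the results JSON above
io_size=4096              # from "io_size" (bytes per I/O)
awk -v i="$iops" -v s="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", i * s / 1048576 }'   # → 41.53 MiB/s
```

This matches the `"mibps": 41.53190927536917` field reported in the summary.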
00:33:22.874 11340.00 IOPS, 44.30 MiB/s [2024-12-14T21:42:43.758Z] 11478.50 IOPS, 44.84 MiB/s [2024-12-14T21:42:43.758Z] 11378.33 IOPS, 44.45 MiB/s [2024-12-14T21:42:43.758Z] 11413.50 IOPS, 44.58 MiB/s [2024-12-14T21:42:43.758Z] 11435.20 IOPS, 44.67 MiB/s [2024-12-14T21:42:43.758Z] 11433.00 IOPS, 44.66 MiB/s [2024-12-14T21:42:43.758Z] 11419.57 IOPS, 44.61 MiB/s [2024-12-14T21:42:43.758Z] 11396.88 IOPS, 44.52 MiB/s [2024-12-14T21:42:43.758Z] 11414.78 IOPS, 44.59 MiB/s [2024-12-14T21:42:43.758Z] 11427.40 IOPS, 44.64 MiB/s [2024-12-14T21:42:43.758Z] 11444.09 IOPS, 44.70 MiB/s [2024-12-14T21:42:43.758Z] 11437.08 IOPS, 44.68 MiB/s [2024-12-14T21:42:43.758Z] [2024-12-14 22:42:27.259807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.874 [2024-12-14 22:42:27.259842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:22.874 [2024-12-14 22:42:27.259876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:124816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.874 [2024-12-14 22:42:27.259885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:22.874 [2024-12-14 22:42:27.259898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.874 [2024-12-14 22:42:27.259911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:22.874 [2024-12-14 22:42:27.259924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:124832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.874 [2024-12-14 22:42:27.259930] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:22.874 [2024-12-14 22:42:27.259943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:124840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.259951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.259962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.259969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.259981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.259989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:124872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:109 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 
nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:125056 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 
cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.260982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.260995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.261002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.261014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.261021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.261035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.875 [2024-12-14 22:42:27.261042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.875 [2024-12-14 22:42:27.261055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.876 [2024-12-14 22:42:27.261061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.876 [2024-12-14 22:42:27.261074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125144 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:22.876 [2024-12-14 22:42:27.261081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:22.876 [2024-12-14 22:42:27.261095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.876 [2024-12-14 22:42:27.261102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
[... similar command/completion *NOTICE* pairs elided: WRITE lba:125160-125632 and READ lba:124736-124800, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:0004-0048 ...]
00:33:22.877 [2024-12-14 22:42:27.262959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.877 [2024-12-14 22:42:27.262969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:22.877 11281.77 IOPS,
44.07 MiB/s [2024-12-14T21:42:43.761Z] 10475.93 IOPS, 40.92 MiB/s [2024-12-14T21:42:43.761Z] 9777.53 IOPS, 38.19 MiB/s [2024-12-14T21:42:43.761Z] 9286.44 IOPS, 36.28 MiB/s [2024-12-14T21:42:43.761Z] 9409.41 IOPS, 36.76 MiB/s [2024-12-14T21:42:43.761Z] 9524.94 IOPS, 37.21 MiB/s [2024-12-14T21:42:43.761Z] 9697.53 IOPS, 37.88 MiB/s [2024-12-14T21:42:43.761Z] 9893.50 IOPS, 38.65 MiB/s [2024-12-14T21:42:43.761Z] 10066.67 IOPS, 39.32 MiB/s [2024-12-14T21:42:43.761Z] 10127.95 IOPS, 39.56 MiB/s [2024-12-14T21:42:43.761Z] 10184.74 IOPS, 39.78 MiB/s [2024-12-14T21:42:43.761Z] 10243.29 IOPS, 40.01 MiB/s [2024-12-14T21:42:43.761Z] 10375.80 IOPS, 40.53 MiB/s [2024-12-14T21:42:43.761Z] 10488.27 IOPS, 40.97 MiB/s [2024-12-14T21:42:43.761Z] [2024-12-14 22:42:41.048295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.877 [2024-12-14 22:42:41.048331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
[... similar command/completion *NOTICE* pairs elided: WRITE lba:15912-16400 and READ lba:15760-15800, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:0036-0058 ...]
00:33:22.878 [2024-12-14 22:42:41.050441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.878 [2024-12-14 22:42:41.050448] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:22.878 [2024-12-14 22:42:41.050460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.878 [2024-12-14 22:42:41.050468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:22.878 [2024-12-14 22:42:41.050480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.878 [2024-12-14 22:42:41.050487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:22.878 [2024-12-14 22:42:41.050500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.878 [2024-12-14 22:42:41.050507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:22.878 [2024-12-14 22:42:41.050519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.878 [2024-12-14 22:42:41.050529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:22.878 [2024-12-14 22:42:41.050542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.878 [2024-12-14 22:42:41.050550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.050562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.879 [2024-12-14 22:42:41.050569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.050582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.879 [2024-12-14 22:42:41.050589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.050601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.879 [2024-12-14 22:42:41.050608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.050620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.879 [2024-12-14 22:42:41.050627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.050639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.879 [2024-12-14 22:42:41.050647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.050659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.879 [2024-12-14 22:42:41.050666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.050678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.879 [2024-12-14 22:42:41.050685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.050698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.879 [2024-12-14 22:42:41.050706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.050719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.879 [2024-12-14 22:42:41.050726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.050894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.879 [2024-12-14 22:42:41.050909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.050924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.879 [2024-12-14 22:42:41.050932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.050946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.879 [2024-12-14 22:42:41.050953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.050966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.879 [2024-12-14 22:42:41.050974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.050986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.879 [2024-12-14 22:42:41.050994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.051006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.879 [2024-12-14 22:42:41.051013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.051025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.879 [2024-12-14 22:42:41.051032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.051045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.879 [2024-12-14 22:42:41.051052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.051064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.879 [2024-12-14 22:42:41.051071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.051083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.879 [2024-12-14 22:42:41.051091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.051103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.879 [2024-12-14 22:42:41.051110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.051122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.879 [2024-12-14 22:42:41.051128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:22.879 [2024-12-14 22:42:41.051141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.879 [2024-12-14 22:42:41.051148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:22.879 10567.70 IOPS, 41.28 MiB/s [2024-12-14T21:42:43.763Z] 
10605.18 IOPS, 41.43 MiB/s [2024-12-14T21:42:43.763Z] Received shutdown signal, test time was about 28.957378 seconds
00:33:22.879
00:33:22.879 Latency(us)
00:33:22.879 [2024-12-14T21:42:43.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:22.879 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:22.879 Verification LBA range: start 0x0 length 0x4000
00:33:22.879 Nvme0n1 : 28.96 10632.17 41.53 0.00 0.00 12018.89 438.86 3019898.88
00:33:22.879 [2024-12-14T21:42:43.763Z] ===================================================================================================================
00:33:22.879 [2024-12-14T21:42:43.763Z] Total : 10632.17 41.53 0.00 0.00 12018.89 438.86 3019898.88
00:33:22.879 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:23.139 22:42:43
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:23.139 rmmod nvme_tcp 00:33:23.139 rmmod nvme_fabrics 00:33:23.139 rmmod nvme_keyring 00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 493149 ']' 00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 493149 00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 493149 ']' 00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 493149 00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 493149 00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 493149' 00:33:23.139 killing process with pid 493149 00:33:23.139 22:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 493149 00:33:23.139 22:42:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 493149
00:33:23.398 22:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:23.398 22:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:23.398 22:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:23.398 22:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:33:23.398 22:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:33:23.398 22:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:23.398 22:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:33:23.398 22:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:23.398 22:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:23.398 22:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:23.398 22:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:23.398 22:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:25.303 22:42:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:25.303
00:33:25.303 real 0m40.544s
00:33:25.303 user 1m50.298s
00:33:25.303 sys 0m11.398s
00:33:25.303 22:42:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:25.303 22:42:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:33:25.303
************************************
00:33:25.303 END TEST nvmf_host_multipath_status
00:33:25.303 ************************************
00:33:25.303 22:42:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:33:25.303 22:42:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:33:25.303 22:42:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:25.303 22:42:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:33:25.562 ************************************
00:33:25.562 START TEST nvmf_discovery_remove_ifc
00:33:25.562 ************************************
00:33:25.562 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:33:25.562 * Looking for test storage...
00:33:25.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:25.562 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:25.562 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:33:25.562 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:25.562 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:33:25.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.563 --rc genhtml_branch_coverage=1 00:33:25.563 --rc genhtml_function_coverage=1 00:33:25.563 --rc genhtml_legend=1 00:33:25.563 --rc geninfo_all_blocks=1 00:33:25.563 --rc geninfo_unexecuted_blocks=1 00:33:25.563 00:33:25.563 ' 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:25.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.563 --rc genhtml_branch_coverage=1 00:33:25.563 --rc genhtml_function_coverage=1 00:33:25.563 --rc genhtml_legend=1 00:33:25.563 --rc geninfo_all_blocks=1 00:33:25.563 --rc geninfo_unexecuted_blocks=1 00:33:25.563 00:33:25.563 ' 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:25.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.563 --rc genhtml_branch_coverage=1 00:33:25.563 --rc genhtml_function_coverage=1 00:33:25.563 --rc genhtml_legend=1 00:33:25.563 --rc geninfo_all_blocks=1 00:33:25.563 --rc geninfo_unexecuted_blocks=1 00:33:25.563 00:33:25.563 ' 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:25.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.563 --rc genhtml_branch_coverage=1 00:33:25.563 --rc genhtml_function_coverage=1 00:33:25.563 --rc genhtml_legend=1 00:33:25.563 --rc geninfo_all_blocks=1 00:33:25.563 --rc geninfo_unexecuted_blocks=1 00:33:25.563 00:33:25.563 ' 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:25.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:25.563 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:25.564 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:25.564 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:25.564 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:25.564 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:25.564 
22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:25.564 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:25.564 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.564 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:25.564 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.564 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:25.564 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:25.564 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:25.564 22:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:32.132 22:42:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:32.132 22:42:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:32.132 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:32.133 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:32.133 22:42:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:32.133 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:32.133 Found net devices under 0000:af:00.0: cvl_0_0 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:32.133 Found net devices under 0000:af:00.1: cvl_0_1 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:32.133 22:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:32.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:32.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:33:32.133 00:33:32.133 --- 10.0.0.2 ping statistics --- 00:33:32.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.133 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:32.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:32.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:33:32.133 00:33:32.133 --- 10.0.0.1 ping statistics --- 00:33:32.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.133 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=501944 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 501944 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 501944 ']' 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:32.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.133 [2024-12-14 22:42:52.293968] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:33:32.133 [2024-12-14 22:42:52.294015] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:32.133 [2024-12-14 22:42:52.373091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.133 [2024-12-14 22:42:52.394209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:32.133 [2024-12-14 22:42:52.394244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:32.133 [2024-12-14 22:42:52.394251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:32.133 [2024-12-14 22:42:52.394257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:32.133 [2024-12-14 22:42:52.394262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:32.133 [2024-12-14 22:42:52.394752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:32.133 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.134 [2024-12-14 22:42:52.529462] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.134 [2024-12-14 22:42:52.537614] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:32.134 null0 00:33:32.134 [2024-12-14 22:42:52.569609] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=501967 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 501967 /tmp/host.sock 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 501967 ']' 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:32.134 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.134 [2024-12-14 22:42:52.637082] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:33:32.134 [2024-12-14 22:42:52.637122] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid501967 ] 00:33:32.134 [2024-12-14 22:42:52.710167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.134 [2024-12-14 22:42:52.732877] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.134 22:42:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.134 22:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.070 [2024-12-14 22:42:53.926388] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:33.070 [2024-12-14 22:42:53.926412] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:33.070 [2024-12-14 22:42:53.926425] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:33.329 [2024-12-14 22:42:54.053800] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:33.329 [2024-12-14 22:42:54.115407] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:33.329 [2024-12-14 22:42:54.116130] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1365b50:1 started. 
00:33:33.329 [2024-12-14 22:42:54.117432] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:33.329 [2024-12-14 22:42:54.117471] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:33.329 [2024-12-14 22:42:54.117490] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:33.329 [2024-12-14 22:42:54.117501] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:33.329 [2024-12-14 22:42:54.117517] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:33.329 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.329 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:33.329 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:33.329 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:33.329 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:33.329 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.329 [2024-12-14 22:42:54.124819] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1365b50 was disconnected and freed. delete nvme_qpair. 
00:33:33.329 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:33.329 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.329 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:33.329 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.329 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:33.329 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:33.329 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:33.588 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:33.588 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:33.588 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:33.588 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:33.588 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.588 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:33.588 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.588 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:33.588 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.589 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:33.589 22:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:34.524 22:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:34.524 22:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:34.524 22:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:34.524 22:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.524 22:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:34.524 22:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.524 22:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:34.524 22:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.524 22:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:34.524 22:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:35.900 22:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:35.900 22:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.900 22:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:35.900 22:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:35.900 22:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:35.900 22:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:35.900 22:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:35.900 22:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.900 22:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:35.900 22:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:36.837 22:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:36.837 22:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:36.837 22:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:36.837 22:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.837 22:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:36.837 22:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.837 22:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:36.837 22:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.837 22:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:36.837 22:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:37.774 22:42:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:37.774 22:42:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:37.774 22:42:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:37.774 22:42:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.774 22:42:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:37.774 22:42:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:37.774 22:42:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:37.774 22:42:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.774 22:42:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:37.774 22:42:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:38.711 22:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:38.711 22:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:38.711 22:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:38.711 22:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.711 22:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:38.711 22:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:38.711 22:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:33:38.711 22:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.711 [2024-12-14 22:42:59.559261] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:38.711 [2024-12-14 22:42:59.559296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.711 [2024-12-14 22:42:59.559308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.711 [2024-12-14 22:42:59.559317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.711 [2024-12-14 22:42:59.559324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.712 [2024-12-14 22:42:59.559332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.712 [2024-12-14 22:42:59.559338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.712 [2024-12-14 22:42:59.559345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.712 [2024-12-14 22:42:59.559352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.712 [2024-12-14 22:42:59.559358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.712 [2024-12-14 22:42:59.559365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.712 [2024-12-14 22:42:59.559371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1342290 is same with the state(6) to be set 00:33:38.712 22:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:38.712 [2024-12-14 22:42:59.569283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1342290 (9): Bad file descriptor 00:33:38.712 22:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:38.712 [2024-12-14 22:42:59.579318] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:38.712 [2024-12-14 22:42:59.579330] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:38.712 [2024-12-14 22:42:59.579336] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:38.712 [2024-12-14 22:42:59.579344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:38.712 [2024-12-14 22:42:59.579364] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:40.088 22:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:40.088 22:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:40.088 22:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:40.088 22:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.088 22:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:40.088 22:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:40.088 22:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:40.088 [2024-12-14 22:43:00.594962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:40.088 [2024-12-14 22:43:00.595047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1342290 with addr=10.0.0.2, port=4420 00:33:40.088 [2024-12-14 22:43:00.595082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1342290 is same with the state(6) to be set 00:33:40.088 [2024-12-14 22:43:00.595141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1342290 (9): Bad file descriptor 00:33:40.088 [2024-12-14 22:43:00.596129] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:33:40.088 [2024-12-14 22:43:00.596197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:40.088 [2024-12-14 22:43:00.596222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:40.088 [2024-12-14 22:43:00.596246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:40.088 [2024-12-14 22:43:00.596268] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:40.088 [2024-12-14 22:43:00.596284] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:40.088 [2024-12-14 22:43:00.596298] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:40.089 [2024-12-14 22:43:00.596321] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:40.089 [2024-12-14 22:43:00.596336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:40.089 22:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.089 22:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:40.089 22:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:41.026 [2024-12-14 22:43:01.598849] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:41.026 [2024-12-14 22:43:01.598870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:33:41.026 [2024-12-14 22:43:01.598883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:41.026 [2024-12-14 22:43:01.598890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:41.026 [2024-12-14 22:43:01.598897] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:33:41.026 [2024-12-14 22:43:01.598908] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:41.026 [2024-12-14 22:43:01.598918] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:41.026 [2024-12-14 22:43:01.598922] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:41.026 [2024-12-14 22:43:01.598943] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:41.026 [2024-12-14 22:43:01.598964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.026 [2024-12-14 22:43:01.598975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.026 [2024-12-14 22:43:01.598984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.026 [2024-12-14 22:43:01.598991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.026 [2024-12-14 22:43:01.598998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:33:41.026 [2024-12-14 22:43:01.599005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.026 [2024-12-14 22:43:01.599012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.026 [2024-12-14 22:43:01.599018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.026 [2024-12-14 22:43:01.599025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.026 [2024-12-14 22:43:01.599032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.026 [2024-12-14 22:43:01.599038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:33:41.026 [2024-12-14 22:43:01.599357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13319e0 (9): Bad file descriptor 00:33:41.026 [2024-12-14 22:43:01.600369] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:41.026 [2024-12-14 22:43:01.600380] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:41.026 22:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:41.961 22:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.961 22:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.961 22:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.961 22:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.961 22:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.961 22:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.961 22:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.961 22:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.220 22:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:42.220 22:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:42.788 [2024-12-14 22:43:03.651375] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:42.788 [2024-12-14 22:43:03.651392] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:42.788 [2024-12-14 22:43:03.651403] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:43.046 [2024-12-14 22:43:03.779803] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:43.046 22:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:43.046 22:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:43.046 22:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:43.046 22:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.046 22:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:43.046 22:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.046 22:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:43.046 22:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.046 22:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:43.046 22:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:43.304 [2024-12-14 22:43:03.960753] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:33:43.304 [2024-12-14 22:43:03.961211] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1344540:1 started. 
00:33:43.304 [2024-12-14 22:43:03.962232] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:43.304 [2024-12-14 22:43:03.962262] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:43.304 [2024-12-14 22:43:03.962278] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:43.304 [2024-12-14 22:43:03.962289] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:43.304 [2024-12-14 22:43:03.962296] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:43.304 [2024-12-14 22:43:03.969796] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1344540 was disconnected and freed. delete nvme_qpair. 00:33:44.236 22:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:44.236 22:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:44.236 22:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:44.236 22:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.236 22:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:44.236 22:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:44.236 22:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:44.236 22:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.236 22:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:44.236 22:43:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:44.236 22:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 501967 00:33:44.236 22:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 501967 ']' 00:33:44.236 22:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 501967 00:33:44.236 22:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:44.236 22:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:44.236 22:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 501967 00:33:44.236 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:44.236 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:44.236 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 501967' 00:33:44.236 killing process with pid 501967 00:33:44.236 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 501967 00:33:44.236 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 501967 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:44.495 22:43:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:44.495 rmmod nvme_tcp 00:33:44.495 rmmod nvme_fabrics 00:33:44.495 rmmod nvme_keyring 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 501944 ']' 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 501944 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 501944 ']' 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 501944 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 501944 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 501944' 00:33:44.495 killing process 
with pid 501944 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 501944 00:33:44.495 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 501944 00:33:44.753 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:44.753 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:44.753 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:44.753 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:44.753 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:33:44.753 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:44.753 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:33:44.753 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:44.753 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:44.753 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.753 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:44.753 22:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.658 22:43:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:46.658 00:33:46.658 real 0m21.306s 00:33:46.658 user 0m26.577s 00:33:46.658 sys 0m5.854s 00:33:46.658 22:43:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:33:46.658 22:43:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:46.658 ************************************ 00:33:46.658 END TEST nvmf_discovery_remove_ifc 00:33:46.658 ************************************ 00:33:46.658 22:43:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:46.658 22:43:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:46.658 22:43:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:46.658 22:43:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.918 ************************************ 00:33:46.918 START TEST nvmf_identify_kernel_target 00:33:46.918 ************************************ 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:46.918 * Looking for test storage... 
00:33:46.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:46.918 22:43:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:46.918 22:43:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:46.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.918 --rc genhtml_branch_coverage=1 00:33:46.918 --rc genhtml_function_coverage=1 00:33:46.918 --rc genhtml_legend=1 00:33:46.918 --rc geninfo_all_blocks=1 00:33:46.918 --rc geninfo_unexecuted_blocks=1 00:33:46.918 00:33:46.918 ' 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:46.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.918 --rc genhtml_branch_coverage=1 00:33:46.918 --rc genhtml_function_coverage=1 00:33:46.918 --rc genhtml_legend=1 00:33:46.918 --rc geninfo_all_blocks=1 00:33:46.918 --rc geninfo_unexecuted_blocks=1 00:33:46.918 00:33:46.918 ' 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:46.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.918 --rc genhtml_branch_coverage=1 00:33:46.918 --rc genhtml_function_coverage=1 00:33:46.918 --rc genhtml_legend=1 00:33:46.918 --rc geninfo_all_blocks=1 00:33:46.918 --rc geninfo_unexecuted_blocks=1 00:33:46.918 00:33:46.918 ' 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:46.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.918 --rc genhtml_branch_coverage=1 00:33:46.918 --rc genhtml_function_coverage=1 00:33:46.918 --rc genhtml_legend=1 00:33:46.918 --rc geninfo_all_blocks=1 00:33:46.918 --rc geninfo_unexecuted_blocks=1 00:33:46.918 00:33:46.918 ' 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:46.918 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:46.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:46.919 22:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:53.490 22:43:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:53.490 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:53.490 22:43:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:53.490 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.490 22:43:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:53.490 Found net devices under 0000:af:00.0: cvl_0_0 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:53.490 Found net devices under 0000:af:00.1: cvl_0_1 
00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:53.490 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:53.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:53.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:33:53.491 00:33:53.491 --- 10.0.0.2 ping statistics --- 00:33:53.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.491 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:53.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:53.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:33:53.491 00:33:53.491 --- 10.0.0.1 ping statistics --- 00:33:53.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.491 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:53.491 
22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:53.491 22:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:56.029 Waiting for block devices as requested 00:33:56.029 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:56.029 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:56.029 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:56.029 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:56.029 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:56.029 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:56.029 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:56.288 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:56.288 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:56.288 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:56.547 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:56.547 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:56.547 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:56.807 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:56.807 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:33:56.807 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:56.807 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:57.074 No valid GPT data, bailing 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:33:57.074 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:57.075 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:33:57.075 00:33:57.075 Discovery Log Number of Records 2, Generation counter 2 00:33:57.075 =====Discovery Log Entry 0====== 00:33:57.075 trtype: tcp 00:33:57.075 adrfam: ipv4 00:33:57.075 subtype: current discovery subsystem 
00:33:57.075 treq: not specified, sq flow control disable supported 00:33:57.075 portid: 1 00:33:57.075 trsvcid: 4420 00:33:57.075 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:57.075 traddr: 10.0.0.1 00:33:57.075 eflags: none 00:33:57.075 sectype: none 00:33:57.075 =====Discovery Log Entry 1====== 00:33:57.075 trtype: tcp 00:33:57.075 adrfam: ipv4 00:33:57.075 subtype: nvme subsystem 00:33:57.075 treq: not specified, sq flow control disable supported 00:33:57.075 portid: 1 00:33:57.075 trsvcid: 4420 00:33:57.075 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:57.075 traddr: 10.0.0.1 00:33:57.075 eflags: none 00:33:57.075 sectype: none 00:33:57.075 22:43:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:57.075 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:57.335 ===================================================== 00:33:57.335 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:57.335 ===================================================== 00:33:57.335 Controller Capabilities/Features 00:33:57.335 ================================ 00:33:57.335 Vendor ID: 0000 00:33:57.335 Subsystem Vendor ID: 0000 00:33:57.335 Serial Number: f209db16bc5b99a3cb18 00:33:57.335 Model Number: Linux 00:33:57.335 Firmware Version: 6.8.9-20 00:33:57.335 Recommended Arb Burst: 0 00:33:57.335 IEEE OUI Identifier: 00 00 00 00:33:57.335 Multi-path I/O 00:33:57.335 May have multiple subsystem ports: No 00:33:57.335 May have multiple controllers: No 00:33:57.335 Associated with SR-IOV VF: No 00:33:57.335 Max Data Transfer Size: Unlimited 00:33:57.335 Max Number of Namespaces: 0 00:33:57.335 Max Number of I/O Queues: 1024 00:33:57.335 NVMe Specification Version (VS): 1.3 00:33:57.335 NVMe Specification Version (Identify): 1.3 00:33:57.335 Maximum Queue Entries: 1024 
00:33:57.335 Contiguous Queues Required: No 00:33:57.335 Arbitration Mechanisms Supported 00:33:57.335 Weighted Round Robin: Not Supported 00:33:57.335 Vendor Specific: Not Supported 00:33:57.335 Reset Timeout: 7500 ms 00:33:57.335 Doorbell Stride: 4 bytes 00:33:57.335 NVM Subsystem Reset: Not Supported 00:33:57.335 Command Sets Supported 00:33:57.335 NVM Command Set: Supported 00:33:57.335 Boot Partition: Not Supported 00:33:57.335 Memory Page Size Minimum: 4096 bytes 00:33:57.335 Memory Page Size Maximum: 4096 bytes 00:33:57.335 Persistent Memory Region: Not Supported 00:33:57.335 Optional Asynchronous Events Supported 00:33:57.335 Namespace Attribute Notices: Not Supported 00:33:57.335 Firmware Activation Notices: Not Supported 00:33:57.335 ANA Change Notices: Not Supported 00:33:57.335 PLE Aggregate Log Change Notices: Not Supported 00:33:57.335 LBA Status Info Alert Notices: Not Supported 00:33:57.335 EGE Aggregate Log Change Notices: Not Supported 00:33:57.335 Normal NVM Subsystem Shutdown event: Not Supported 00:33:57.335 Zone Descriptor Change Notices: Not Supported 00:33:57.335 Discovery Log Change Notices: Supported 00:33:57.335 Controller Attributes 00:33:57.335 128-bit Host Identifier: Not Supported 00:33:57.335 Non-Operational Permissive Mode: Not Supported 00:33:57.335 NVM Sets: Not Supported 00:33:57.335 Read Recovery Levels: Not Supported 00:33:57.335 Endurance Groups: Not Supported 00:33:57.335 Predictable Latency Mode: Not Supported 00:33:57.335 Traffic Based Keep ALive: Not Supported 00:33:57.335 Namespace Granularity: Not Supported 00:33:57.335 SQ Associations: Not Supported 00:33:57.335 UUID List: Not Supported 00:33:57.335 Multi-Domain Subsystem: Not Supported 00:33:57.335 Fixed Capacity Management: Not Supported 00:33:57.335 Variable Capacity Management: Not Supported 00:33:57.335 Delete Endurance Group: Not Supported 00:33:57.335 Delete NVM Set: Not Supported 00:33:57.335 Extended LBA Formats Supported: Not Supported 00:33:57.335 Flexible 
Data Placement Supported: Not Supported 00:33:57.335 00:33:57.335 Controller Memory Buffer Support 00:33:57.335 ================================ 00:33:57.335 Supported: No 00:33:57.335 00:33:57.335 Persistent Memory Region Support 00:33:57.335 ================================ 00:33:57.335 Supported: No 00:33:57.335 00:33:57.335 Admin Command Set Attributes 00:33:57.335 ============================ 00:33:57.335 Security Send/Receive: Not Supported 00:33:57.335 Format NVM: Not Supported 00:33:57.335 Firmware Activate/Download: Not Supported 00:33:57.335 Namespace Management: Not Supported 00:33:57.335 Device Self-Test: Not Supported 00:33:57.335 Directives: Not Supported 00:33:57.335 NVMe-MI: Not Supported 00:33:57.335 Virtualization Management: Not Supported 00:33:57.335 Doorbell Buffer Config: Not Supported 00:33:57.335 Get LBA Status Capability: Not Supported 00:33:57.335 Command & Feature Lockdown Capability: Not Supported 00:33:57.335 Abort Command Limit: 1 00:33:57.335 Async Event Request Limit: 1 00:33:57.335 Number of Firmware Slots: N/A 00:33:57.335 Firmware Slot 1 Read-Only: N/A 00:33:57.335 Firmware Activation Without Reset: N/A 00:33:57.335 Multiple Update Detection Support: N/A 00:33:57.335 Firmware Update Granularity: No Information Provided 00:33:57.335 Per-Namespace SMART Log: No 00:33:57.335 Asymmetric Namespace Access Log Page: Not Supported 00:33:57.335 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:57.335 Command Effects Log Page: Not Supported 00:33:57.335 Get Log Page Extended Data: Supported 00:33:57.335 Telemetry Log Pages: Not Supported 00:33:57.335 Persistent Event Log Pages: Not Supported 00:33:57.335 Supported Log Pages Log Page: May Support 00:33:57.335 Commands Supported & Effects Log Page: Not Supported 00:33:57.335 Feature Identifiers & Effects Log Page:May Support 00:33:57.335 NVMe-MI Commands & Effects Log Page: May Support 00:33:57.335 Data Area 4 for Telemetry Log: Not Supported 00:33:57.335 Error Log Page Entries 
Supported: 1 00:33:57.335 Keep Alive: Not Supported 00:33:57.335 00:33:57.335 NVM Command Set Attributes 00:33:57.335 ========================== 00:33:57.335 Submission Queue Entry Size 00:33:57.335 Max: 1 00:33:57.335 Min: 1 00:33:57.335 Completion Queue Entry Size 00:33:57.335 Max: 1 00:33:57.335 Min: 1 00:33:57.335 Number of Namespaces: 0 00:33:57.335 Compare Command: Not Supported 00:33:57.335 Write Uncorrectable Command: Not Supported 00:33:57.335 Dataset Management Command: Not Supported 00:33:57.335 Write Zeroes Command: Not Supported 00:33:57.335 Set Features Save Field: Not Supported 00:33:57.335 Reservations: Not Supported 00:33:57.335 Timestamp: Not Supported 00:33:57.335 Copy: Not Supported 00:33:57.335 Volatile Write Cache: Not Present 00:33:57.335 Atomic Write Unit (Normal): 1 00:33:57.335 Atomic Write Unit (PFail): 1 00:33:57.335 Atomic Compare & Write Unit: 1 00:33:57.335 Fused Compare & Write: Not Supported 00:33:57.335 Scatter-Gather List 00:33:57.335 SGL Command Set: Supported 00:33:57.335 SGL Keyed: Not Supported 00:33:57.335 SGL Bit Bucket Descriptor: Not Supported 00:33:57.335 SGL Metadata Pointer: Not Supported 00:33:57.335 Oversized SGL: Not Supported 00:33:57.335 SGL Metadata Address: Not Supported 00:33:57.335 SGL Offset: Supported 00:33:57.335 Transport SGL Data Block: Not Supported 00:33:57.335 Replay Protected Memory Block: Not Supported 00:33:57.335 00:33:57.335 Firmware Slot Information 00:33:57.335 ========================= 00:33:57.335 Active slot: 0 00:33:57.335 00:33:57.335 00:33:57.335 Error Log 00:33:57.335 ========= 00:33:57.335 00:33:57.335 Active Namespaces 00:33:57.335 ================= 00:33:57.335 Discovery Log Page 00:33:57.335 ================== 00:33:57.335 Generation Counter: 2 00:33:57.335 Number of Records: 2 00:33:57.335 Record Format: 0 00:33:57.335 00:33:57.335 Discovery Log Entry 0 00:33:57.335 ---------------------- 00:33:57.335 Transport Type: 3 (TCP) 00:33:57.335 Address Family: 1 (IPv4) 00:33:57.335 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:33:57.335 Entry Flags: 00:33:57.335 Duplicate Returned Information: 0 00:33:57.335 Explicit Persistent Connection Support for Discovery: 0 00:33:57.335 Transport Requirements: 00:33:57.335 Secure Channel: Not Specified 00:33:57.335 Port ID: 1 (0x0001) 00:33:57.335 Controller ID: 65535 (0xffff) 00:33:57.335 Admin Max SQ Size: 32 00:33:57.335 Transport Service Identifier: 4420 00:33:57.335 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:57.335 Transport Address: 10.0.0.1 00:33:57.335 Discovery Log Entry 1 00:33:57.335 ---------------------- 00:33:57.335 Transport Type: 3 (TCP) 00:33:57.335 Address Family: 1 (IPv4) 00:33:57.335 Subsystem Type: 2 (NVM Subsystem) 00:33:57.335 Entry Flags: 00:33:57.335 Duplicate Returned Information: 0 00:33:57.335 Explicit Persistent Connection Support for Discovery: 0 00:33:57.335 Transport Requirements: 00:33:57.335 Secure Channel: Not Specified 00:33:57.335 Port ID: 1 (0x0001) 00:33:57.335 Controller ID: 65535 (0xffff) 00:33:57.335 Admin Max SQ Size: 32 00:33:57.335 Transport Service Identifier: 4420 00:33:57.335 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:57.335 Transport Address: 10.0.0.1 00:33:57.335 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:57.335 get_feature(0x01) failed 00:33:57.335 get_feature(0x02) failed 00:33:57.336 get_feature(0x04) failed 00:33:57.336 ===================================================== 00:33:57.336 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:57.336 ===================================================== 00:33:57.336 Controller Capabilities/Features 00:33:57.336 ================================ 00:33:57.336 Vendor ID: 0000 00:33:57.336 Subsystem Vendor ID: 
0000 00:33:57.336 Serial Number: 83916e3cbfec76dee58e 00:33:57.336 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:57.336 Firmware Version: 6.8.9-20 00:33:57.336 Recommended Arb Burst: 6 00:33:57.336 IEEE OUI Identifier: 00 00 00 00:33:57.336 Multi-path I/O 00:33:57.336 May have multiple subsystem ports: Yes 00:33:57.336 May have multiple controllers: Yes 00:33:57.336 Associated with SR-IOV VF: No 00:33:57.336 Max Data Transfer Size: Unlimited 00:33:57.336 Max Number of Namespaces: 1024 00:33:57.336 Max Number of I/O Queues: 128 00:33:57.336 NVMe Specification Version (VS): 1.3 00:33:57.336 NVMe Specification Version (Identify): 1.3 00:33:57.336 Maximum Queue Entries: 1024 00:33:57.336 Contiguous Queues Required: No 00:33:57.336 Arbitration Mechanisms Supported 00:33:57.336 Weighted Round Robin: Not Supported 00:33:57.336 Vendor Specific: Not Supported 00:33:57.336 Reset Timeout: 7500 ms 00:33:57.336 Doorbell Stride: 4 bytes 00:33:57.336 NVM Subsystem Reset: Not Supported 00:33:57.336 Command Sets Supported 00:33:57.336 NVM Command Set: Supported 00:33:57.336 Boot Partition: Not Supported 00:33:57.336 Memory Page Size Minimum: 4096 bytes 00:33:57.336 Memory Page Size Maximum: 4096 bytes 00:33:57.336 Persistent Memory Region: Not Supported 00:33:57.336 Optional Asynchronous Events Supported 00:33:57.336 Namespace Attribute Notices: Supported 00:33:57.336 Firmware Activation Notices: Not Supported 00:33:57.336 ANA Change Notices: Supported 00:33:57.336 PLE Aggregate Log Change Notices: Not Supported 00:33:57.336 LBA Status Info Alert Notices: Not Supported 00:33:57.336 EGE Aggregate Log Change Notices: Not Supported 00:33:57.336 Normal NVM Subsystem Shutdown event: Not Supported 00:33:57.336 Zone Descriptor Change Notices: Not Supported 00:33:57.336 Discovery Log Change Notices: Not Supported 00:33:57.336 Controller Attributes 00:33:57.336 128-bit Host Identifier: Supported 00:33:57.336 Non-Operational Permissive Mode: Not Supported 00:33:57.336 NVM Sets: Not 
Supported 00:33:57.336 Read Recovery Levels: Not Supported 00:33:57.336 Endurance Groups: Not Supported 00:33:57.336 Predictable Latency Mode: Not Supported 00:33:57.336 Traffic Based Keep ALive: Supported 00:33:57.336 Namespace Granularity: Not Supported 00:33:57.336 SQ Associations: Not Supported 00:33:57.336 UUID List: Not Supported 00:33:57.336 Multi-Domain Subsystem: Not Supported 00:33:57.336 Fixed Capacity Management: Not Supported 00:33:57.336 Variable Capacity Management: Not Supported 00:33:57.336 Delete Endurance Group: Not Supported 00:33:57.336 Delete NVM Set: Not Supported 00:33:57.336 Extended LBA Formats Supported: Not Supported 00:33:57.336 Flexible Data Placement Supported: Not Supported 00:33:57.336 00:33:57.336 Controller Memory Buffer Support 00:33:57.336 ================================ 00:33:57.336 Supported: No 00:33:57.336 00:33:57.336 Persistent Memory Region Support 00:33:57.336 ================================ 00:33:57.336 Supported: No 00:33:57.336 00:33:57.336 Admin Command Set Attributes 00:33:57.336 ============================ 00:33:57.336 Security Send/Receive: Not Supported 00:33:57.336 Format NVM: Not Supported 00:33:57.336 Firmware Activate/Download: Not Supported 00:33:57.336 Namespace Management: Not Supported 00:33:57.336 Device Self-Test: Not Supported 00:33:57.336 Directives: Not Supported 00:33:57.336 NVMe-MI: Not Supported 00:33:57.336 Virtualization Management: Not Supported 00:33:57.336 Doorbell Buffer Config: Not Supported 00:33:57.336 Get LBA Status Capability: Not Supported 00:33:57.336 Command & Feature Lockdown Capability: Not Supported 00:33:57.336 Abort Command Limit: 4 00:33:57.336 Async Event Request Limit: 4 00:33:57.336 Number of Firmware Slots: N/A 00:33:57.336 Firmware Slot 1 Read-Only: N/A 00:33:57.336 Firmware Activation Without Reset: N/A 00:33:57.336 Multiple Update Detection Support: N/A 00:33:57.336 Firmware Update Granularity: No Information Provided 00:33:57.336 Per-Namespace SMART Log: Yes 
00:33:57.336 Asymmetric Namespace Access Log Page: Supported 00:33:57.336 ANA Transition Time : 10 sec 00:33:57.336 00:33:57.336 Asymmetric Namespace Access Capabilities 00:33:57.336 ANA Optimized State : Supported 00:33:57.336 ANA Non-Optimized State : Supported 00:33:57.336 ANA Inaccessible State : Supported 00:33:57.336 ANA Persistent Loss State : Supported 00:33:57.336 ANA Change State : Supported 00:33:57.336 ANAGRPID is not changed : No 00:33:57.336 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:57.336 00:33:57.336 ANA Group Identifier Maximum : 128 00:33:57.336 Number of ANA Group Identifiers : 128 00:33:57.336 Max Number of Allowed Namespaces : 1024 00:33:57.336 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:57.336 Command Effects Log Page: Supported 00:33:57.336 Get Log Page Extended Data: Supported 00:33:57.336 Telemetry Log Pages: Not Supported 00:33:57.336 Persistent Event Log Pages: Not Supported 00:33:57.336 Supported Log Pages Log Page: May Support 00:33:57.336 Commands Supported & Effects Log Page: Not Supported 00:33:57.336 Feature Identifiers & Effects Log Page:May Support 00:33:57.336 NVMe-MI Commands & Effects Log Page: May Support 00:33:57.336 Data Area 4 for Telemetry Log: Not Supported 00:33:57.336 Error Log Page Entries Supported: 128 00:33:57.336 Keep Alive: Supported 00:33:57.336 Keep Alive Granularity: 1000 ms 00:33:57.336 00:33:57.336 NVM Command Set Attributes 00:33:57.336 ========================== 00:33:57.336 Submission Queue Entry Size 00:33:57.336 Max: 64 00:33:57.336 Min: 64 00:33:57.336 Completion Queue Entry Size 00:33:57.336 Max: 16 00:33:57.336 Min: 16 00:33:57.336 Number of Namespaces: 1024 00:33:57.336 Compare Command: Not Supported 00:33:57.336 Write Uncorrectable Command: Not Supported 00:33:57.336 Dataset Management Command: Supported 00:33:57.336 Write Zeroes Command: Supported 00:33:57.336 Set Features Save Field: Not Supported 00:33:57.336 Reservations: Not Supported 00:33:57.336 Timestamp: Not Supported 
00:33:57.336 Copy: Not Supported 00:33:57.336 Volatile Write Cache: Present 00:33:57.336 Atomic Write Unit (Normal): 1 00:33:57.336 Atomic Write Unit (PFail): 1 00:33:57.336 Atomic Compare & Write Unit: 1 00:33:57.336 Fused Compare & Write: Not Supported 00:33:57.336 Scatter-Gather List 00:33:57.336 SGL Command Set: Supported 00:33:57.336 SGL Keyed: Not Supported 00:33:57.336 SGL Bit Bucket Descriptor: Not Supported 00:33:57.336 SGL Metadata Pointer: Not Supported 00:33:57.336 Oversized SGL: Not Supported 00:33:57.336 SGL Metadata Address: Not Supported 00:33:57.336 SGL Offset: Supported 00:33:57.336 Transport SGL Data Block: Not Supported 00:33:57.336 Replay Protected Memory Block: Not Supported 00:33:57.336 00:33:57.336 Firmware Slot Information 00:33:57.336 ========================= 00:33:57.336 Active slot: 0 00:33:57.336 00:33:57.336 Asymmetric Namespace Access 00:33:57.336 =========================== 00:33:57.336 Change Count : 0 00:33:57.336 Number of ANA Group Descriptors : 1 00:33:57.336 ANA Group Descriptor : 0 00:33:57.336 ANA Group ID : 1 00:33:57.336 Number of NSID Values : 1 00:33:57.336 Change Count : 0 00:33:57.336 ANA State : 1 00:33:57.336 Namespace Identifier : 1 00:33:57.336 00:33:57.336 Commands Supported and Effects 00:33:57.336 ============================== 00:33:57.336 Admin Commands 00:33:57.336 -------------- 00:33:57.336 Get Log Page (02h): Supported 00:33:57.336 Identify (06h): Supported 00:33:57.336 Abort (08h): Supported 00:33:57.336 Set Features (09h): Supported 00:33:57.336 Get Features (0Ah): Supported 00:33:57.336 Asynchronous Event Request (0Ch): Supported 00:33:57.336 Keep Alive (18h): Supported 00:33:57.336 I/O Commands 00:33:57.336 ------------ 00:33:57.336 Flush (00h): Supported 00:33:57.336 Write (01h): Supported LBA-Change 00:33:57.336 Read (02h): Supported 00:33:57.336 Write Zeroes (08h): Supported LBA-Change 00:33:57.336 Dataset Management (09h): Supported 00:33:57.336 00:33:57.336 Error Log 00:33:57.336 ========= 
00:33:57.336 Entry: 0 00:33:57.336 Error Count: 0x3 00:33:57.336 Submission Queue Id: 0x0 00:33:57.336 Command Id: 0x5 00:33:57.336 Phase Bit: 0 00:33:57.336 Status Code: 0x2 00:33:57.336 Status Code Type: 0x0 00:33:57.336 Do Not Retry: 1 00:33:57.336 Error Location: 0x28 00:33:57.336 LBA: 0x0 00:33:57.336 Namespace: 0x0 00:33:57.336 Vendor Log Page: 0x0 00:33:57.336 ----------- 00:33:57.336 Entry: 1 00:33:57.336 Error Count: 0x2 00:33:57.336 Submission Queue Id: 0x0 00:33:57.336 Command Id: 0x5 00:33:57.336 Phase Bit: 0 00:33:57.336 Status Code: 0x2 00:33:57.336 Status Code Type: 0x0 00:33:57.336 Do Not Retry: 1 00:33:57.336 Error Location: 0x28 00:33:57.337 LBA: 0x0 00:33:57.337 Namespace: 0x0 00:33:57.337 Vendor Log Page: 0x0 00:33:57.337 ----------- 00:33:57.337 Entry: 2 00:33:57.337 Error Count: 0x1 00:33:57.337 Submission Queue Id: 0x0 00:33:57.337 Command Id: 0x4 00:33:57.337 Phase Bit: 0 00:33:57.337 Status Code: 0x2 00:33:57.337 Status Code Type: 0x0 00:33:57.337 Do Not Retry: 1 00:33:57.337 Error Location: 0x28 00:33:57.337 LBA: 0x0 00:33:57.337 Namespace: 0x0 00:33:57.337 Vendor Log Page: 0x0 00:33:57.337 00:33:57.337 Number of Queues 00:33:57.337 ================ 00:33:57.337 Number of I/O Submission Queues: 128 00:33:57.337 Number of I/O Completion Queues: 128 00:33:57.337 00:33:57.337 ZNS Specific Controller Data 00:33:57.337 ============================ 00:33:57.337 Zone Append Size Limit: 0 00:33:57.337 00:33:57.337 00:33:57.337 Active Namespaces 00:33:57.337 ================= 00:33:57.337 get_feature(0x05) failed 00:33:57.337 Namespace ID:1 00:33:57.337 Command Set Identifier: NVM (00h) 00:33:57.337 Deallocate: Supported 00:33:57.337 Deallocated/Unwritten Error: Not Supported 00:33:57.337 Deallocated Read Value: Unknown 00:33:57.337 Deallocate in Write Zeroes: Not Supported 00:33:57.337 Deallocated Guard Field: 0xFFFF 00:33:57.337 Flush: Supported 00:33:57.337 Reservation: Not Supported 00:33:57.337 Namespace Sharing Capabilities: Multiple 
Controllers 00:33:57.337 Size (in LBAs): 1953525168 (931GiB) 00:33:57.337 Capacity (in LBAs): 1953525168 (931GiB) 00:33:57.337 Utilization (in LBAs): 1953525168 (931GiB) 00:33:57.337 UUID: 955c1a07-8721-4434-a06a-205ee8395f65 00:33:57.337 Thin Provisioning: Not Supported 00:33:57.337 Per-NS Atomic Units: Yes 00:33:57.337 Atomic Boundary Size (Normal): 0 00:33:57.337 Atomic Boundary Size (PFail): 0 00:33:57.337 Atomic Boundary Offset: 0 00:33:57.337 NGUID/EUI64 Never Reused: No 00:33:57.337 ANA group ID: 1 00:33:57.337 Namespace Write Protected: No 00:33:57.337 Number of LBA Formats: 1 00:33:57.337 Current LBA Format: LBA Format #00 00:33:57.337 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:57.337 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:57.337 rmmod nvme_tcp 00:33:57.337 rmmod nvme_fabrics 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:57.337 22:43:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.871 22:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:59.871 22:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:59.871 22:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:59.871 22:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:33:59.871 22:43:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:59.871 22:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:59.871 22:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:59.871 22:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:59.871 22:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:59.871 22:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:59.871 22:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:02.406 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:02.406 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:02.406 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:02.406 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:02.406 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:02.406 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:02.406 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:02.406 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:02.406 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:02.406 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:02.406 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:02.406 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:02.406 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:02.406 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:02.406 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:02.406 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
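The clean_kernel_target sequence traced above (nvmf/common.sh@712-723) undoes that setup: disable the namespace, unlink the subsystem from the port, rmdir the configfs items, then unload the target modules. The plain `rmdir` calls work on configfs because removing an item also drops its attribute files. A hedged mirror of those steps; the root override and the module-unload guard are our own additions for dry runs:

```shell
# Sketch of clean_kernel_target from nvmf/common.sh above. NVMET_ROOT
# is an override added here for dry runs; module unload is skipped
# unless we are operating on the real configfs mount.
clean_kernel_target() {
  local root="${NVMET_ROOT:-/sys/kernel/config/nvmet}"
  local subnqn="nqn.2016-06.io.spdk:testnqn"
  local subsys="$root/subsystems/$subnqn"

  [ -e "$subsys" ] || return 0                 # @712: nothing to clean up

  echo 0 > "$subsys/namespaces/1/enable"       # @714: disable the namespace
  rm -f "$root/ports/1/subsystems/$subnqn"     # @716: unlink port -> subsystem
  rmdir "$subsys/namespaces/1"                 # @717: drop the namespace item
  rmdir "$root/ports/1"                        # @718: drop the port item
  rmdir "$subsys"                              # @719: drop the subsystem item

  if [ "$root" = /sys/kernel/config/nvmet ]; then
    modprobe -r nvmet_tcp nvmet                # @723: unload target modules
  fi
}
```

After this the test restores the PCI devices to vfio-pci via setup.sh, as the ioatdma/nvme rebind lines in the log show.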
00:34:03.344 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:03.344 00:34:03.344 real 0m16.568s 00:34:03.344 user 0m4.369s 00:34:03.344 sys 0m8.617s 00:34:03.344 22:43:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:03.344 22:43:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:03.344 ************************************ 00:34:03.344 END TEST nvmf_identify_kernel_target 00:34:03.344 ************************************ 00:34:03.344 22:43:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:03.344 22:43:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:03.345 22:43:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:03.345 22:43:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.345 ************************************ 00:34:03.345 START TEST nvmf_auth_host 00:34:03.345 ************************************ 00:34:03.345 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:03.604 * Looking for test storage... 
00:34:03.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:03.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.605 --rc genhtml_branch_coverage=1 00:34:03.605 --rc genhtml_function_coverage=1 00:34:03.605 --rc genhtml_legend=1 00:34:03.605 --rc geninfo_all_blocks=1 00:34:03.605 --rc geninfo_unexecuted_blocks=1 00:34:03.605 00:34:03.605 ' 00:34:03.605 22:43:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:03.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.605 --rc genhtml_branch_coverage=1 00:34:03.605 --rc genhtml_function_coverage=1 00:34:03.605 --rc genhtml_legend=1 00:34:03.605 --rc geninfo_all_blocks=1 00:34:03.605 --rc geninfo_unexecuted_blocks=1 00:34:03.605 00:34:03.605 ' 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:03.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.605 --rc genhtml_branch_coverage=1 00:34:03.605 --rc genhtml_function_coverage=1 00:34:03.605 --rc genhtml_legend=1 00:34:03.605 --rc geninfo_all_blocks=1 00:34:03.605 --rc geninfo_unexecuted_blocks=1 00:34:03.605 00:34:03.605 ' 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:03.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.605 --rc genhtml_branch_coverage=1 00:34:03.605 --rc genhtml_function_coverage=1 00:34:03.605 --rc genhtml_legend=1 00:34:03.605 --rc geninfo_all_blocks=1 00:34:03.605 --rc geninfo_unexecuted_blocks=1 00:34:03.605 00:34:03.605 ' 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.605 22:43:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:03.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:03.605 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:03.606 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:03.606 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:03.606 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:03.606 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:03.606 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:03.606 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:03.606 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:03.606 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.606 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:03.606 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.606 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:03.606 22:43:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:03.606 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:03.606 22:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.173 22:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:10.173 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:10.173 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:10.173 Found net devices under 0000:af:00.0: cvl_0_0 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:10.173 Found net devices under 0000:af:00.1: cvl_0_1 00:34:10.173 22:43:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:10.173 22:43:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:10.173 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:10.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:10.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:34:10.174 00:34:10.174 --- 10.0.0.2 ping statistics --- 00:34:10.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.174 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:10.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:10.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:34:10.174 00:34:10.174 --- 10.0.0.1 ping statistics --- 00:34:10.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.174 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=513750 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 513750 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 513750 ']' 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:10.174 22:43:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8b372327ff413d4cfb065b67f5e81bb2 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.x1v 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8b372327ff413d4cfb065b67f5e81bb2 0 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8b372327ff413d4cfb065b67f5e81bb2 0 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8b372327ff413d4cfb065b67f5e81bb2 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.x1v 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.x1v 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.x1v 
00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6d58b9313792a330e50f61560d4d1ed74763a3444eb6eb8cd89cbb85229156f1 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.WYc 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6d58b9313792a330e50f61560d4d1ed74763a3444eb6eb8cd89cbb85229156f1 3 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6d58b9313792a330e50f61560d4d1ed74763a3444eb6eb8cd89cbb85229156f1 3 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6d58b9313792a330e50f61560d4d1ed74763a3444eb6eb8cd89cbb85229156f1 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.WYc 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.WYc 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.WYc 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5c116a32661a730518118dcedc09b866db252f314dbc0026 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.AkH 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5c116a32661a730518118dcedc09b866db252f314dbc0026 0 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5c116a32661a730518118dcedc09b866db252f314dbc0026 0 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5c116a32661a730518118dcedc09b866db252f314dbc0026 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.AkH 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.AkH 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.AkH 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2a52d9738d8cb129f984739dd4d4a00e06632d9d64783321 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.9Aq 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2a52d9738d8cb129f984739dd4d4a00e06632d9d64783321 2 00:34:10.174 22:43:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2a52d9738d8cb129f984739dd4d4a00e06632d9d64783321 2 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2a52d9738d8cb129f984739dd4d4a00e06632d9d64783321 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:10.174 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.9Aq 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.9Aq 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.9Aq 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=25c765286918fd12b159b103a0de7a1e 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.2oR 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 25c765286918fd12b159b103a0de7a1e 1 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 25c765286918fd12b159b103a0de7a1e 1 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=25c765286918fd12b159b103a0de7a1e 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.2oR 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.2oR 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.2oR 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=038b96b8f157c16bb1bca21200059c92 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.9Ee 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 038b96b8f157c16bb1bca21200059c92 1 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 038b96b8f157c16bb1bca21200059c92 1 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=038b96b8f157c16bb1bca21200059c92 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.9Ee 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.9Ee 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.9Ee 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:10.175 22:43:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=89f9f2d8b7ee510f0e28d9f123f583bf4afbdd6556822c7c 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.hB5 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 89f9f2d8b7ee510f0e28d9f123f583bf4afbdd6556822c7c 2 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 89f9f2d8b7ee510f0e28d9f123f583bf4afbdd6556822c7c 2 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=89f9f2d8b7ee510f0e28d9f123f583bf4afbdd6556822c7c 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.hB5 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.hB5 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.hB5 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:10.175 22:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:10.175 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=16d81cc4159d858299fd33040b9e9d51 00:34:10.175 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:10.175 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wNj 00:34:10.175 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 16d81cc4159d858299fd33040b9e9d51 0 00:34:10.175 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 16d81cc4159d858299fd33040b9e9d51 0 00:34:10.175 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:10.175 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:10.175 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=16d81cc4159d858299fd33040b9e9d51 00:34:10.175 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:10.175 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:10.175 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wNj 00:34:10.175 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wNj 00:34:10.175 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.wNj 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=30865693b6e0311a1d00dbe55855292b3bb4738374dfad0fe60b55bb3c3b8071 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.qRx 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 30865693b6e0311a1d00dbe55855292b3bb4738374dfad0fe60b55bb3c3b8071 3 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 30865693b6e0311a1d00dbe55855292b3bb4738374dfad0fe60b55bb3c3b8071 3 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=30865693b6e0311a1d00dbe55855292b3bb4738374dfad0fe60b55bb3c3b8071 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:10.435 22:43:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.qRx 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.qRx 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.qRx 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 513750 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 513750 ']' 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
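The `gen_dhchap_key` calls traced above draw `len/2` random bytes with `xxd`, hex-encode them, and then pipe through an inline Python snippet to produce the `DHHC-1:<digest>:<base64>:` secrets visible later in the log. A minimal stand-alone sketch of that construction (function name and layout are illustrative, not SPDK's actual `nvmf/common.sh`; the base64 body covers the ASCII hex key followed by its little-endian CRC-32, which matches the secret lengths seen in the trace):

```shell
#!/usr/bin/env bash
gen_dhchap_key_sketch() {
    local digest=$1 len=$2 key
    # len hex characters => len/2 random bytes, as in the xxd calls above
    key=$(xxd -p -l $((len / 2)) /dev/urandom | tr -d '\n')
    # Emit DHHC-1:<digest>:base64(ASCII hex key || little-endian CRC-32):
    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
' "$key" "$digest"
}

gen_dhchap_key_sketch 0 32   # null-digest key, 32 hex chars of material
```

Decoding the base64 body of one of the logged secrets back to bytes shows exactly this layout: the ASCII hex key, then four trailing checksum bytes.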
00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.x1v 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.WYc ]] 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WYc 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.AkH 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.435 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.9Aq ]] 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9Aq 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.2oR 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.9Ee ]] 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9Ee 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.hB5 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.wNj ]] 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.wNj 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.qRx 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.695 22:43:31 
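The `host/auth.sh@80-82` iterations traced above register each generated key file (and its controller counterpart, when one exists) with the running SPDK app. A condensed dry-run sketch of that loop; `rpc_cmd` here is a stand-in that only echoes the RPC a live run would send over `/var/tmp/spdk.sock`, and the key paths are sample values copied from the trace:

```shell
# Stand-in for the autotest rpc_cmd wrapper: echo instead of invoking rpc.py.
rpc_cmd() { echo "rpc.py $*"; }

# Sample key/ctrlr-key tables as filled in by gen_dhchap_key (paths from the trace).
keys=(/tmp/spdk.key-null.x1v /tmp/spdk.key-null.AkH /tmp/spdk.key-sha256.2oR)
ckeys=(/tmp/spdk.key-sha512.WYc /tmp/spdk.key-sha384.9Aq /tmp/spdk.key-sha256.9Ee)

# Register key$i, plus ckey$i whenever a controller key was generated for slot i.
for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]:-} ]]; then
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done
```

In the real run the last slot has an empty `ckeys[4]`, which is why the trace shows `[[ -n '' ]]` skipping the final controller-key registration.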
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:10.695 22:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:13.230 Waiting for block devices as requested 00:34:13.230 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:13.488 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:13.488 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:13.488 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:13.747 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:13.747 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:13.747 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:13.747 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:14.005 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:14.005 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:14.005 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:14.005 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:14.264 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:14.264 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:14.264 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:14.264 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:14.522 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:15.090 No valid GPT data, bailing 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:15.090 00:34:15.090 Discovery Log Number of Records 2, Generation counter 2 00:34:15.090 =====Discovery Log Entry 0====== 00:34:15.090 trtype: tcp 00:34:15.090 adrfam: ipv4 00:34:15.090 subtype: current discovery subsystem 00:34:15.090 treq: not specified, sq flow control disable supported 00:34:15.090 portid: 1 00:34:15.090 trsvcid: 4420 00:34:15.090 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:15.090 traddr: 10.0.0.1 00:34:15.090 eflags: none 00:34:15.090 sectype: none 00:34:15.090 =====Discovery Log Entry 1====== 00:34:15.090 trtype: tcp 00:34:15.090 adrfam: ipv4 00:34:15.090 subtype: nvme subsystem 00:34:15.090 treq: not specified, sq flow control disable supported 00:34:15.090 portid: 1 00:34:15.090 trsvcid: 4420 00:34:15.090 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:15.090 traddr: 10.0.0.1 00:34:15.090 eflags: none 00:34:15.090 sectype: none 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
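The `configure_kernel_target` and `host/auth.sh@36-38` steps traced above build the kernel-side nvmet target through configfs. A root-only sketch of the same sequence, condensed (NQNs, block device, and address are the values from the trace; this is a config fragment, not a drop-in replacement for the test helpers):

```shell
# Requires root and the nvmet/nvmet-tcp modules.
modprobe nvmet-tcp   # pulls in nvmet

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

# Subsystem with one namespace backed by the local NVMe drive.
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1" \
      "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

# TCP port 4420 on the initiator-facing address, then expose the subsystem.
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

# Disallow anonymous hosts; whitelist only the authenticated host NQN.
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"
```

The `nvme discover` output in the log (two records: the discovery subsystem and `nqn.2024-02.io.spdk:cnode0`, both on `10.0.0.1:4420`) is exactly what this layout advertises.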
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.090 22:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: ]] 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.349 nvme0n1 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.349 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.608 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.608 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.608 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:15.608 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.608 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.608 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: ]] 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.609 nvme0n1 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.609 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.868 22:43:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: ]] 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.868 
22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.868 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.869 nvme0n1 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: ]] 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.869 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:34:16.128 nvme0n1 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: ]] 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.128 22:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.387 nvme0n1 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:16.387 22:43:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.387 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.648 nvme0n1 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.648 
22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.648 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:16.908 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:16.908 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: ]] 00:34:16.908 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:16.908 
22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:16.908 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.908 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.908 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:16.908 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:16.908 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.908 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:16.908 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.908 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.908 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.908 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.908 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.908 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.908 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.908 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.908 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.909 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.909 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.909 22:43:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.909 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.909 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.909 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:16.909 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.909 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.909 nvme0n1 00:34:17.167 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.167 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.167 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.167 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.167 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.167 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.167 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.167 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.167 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.167 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.167 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.167 22:43:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.167 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:17.167 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.167 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.167 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:17.167 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:17.167 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: ]] 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.168 22:43:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.168 22:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.168 nvme0n1 00:34:17.168 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.168 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.168 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.168 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.168 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.168 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.427 22:43:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: ]] 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.427 nvme0n1 00:34:17.427 22:43:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.427 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:17.687 22:43:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: ]] 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.687 nvme0n1 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.687 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.946 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.946 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.946 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.946 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.946 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.946 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.946 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:17.946 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.946 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.947 22:43:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.947 nvme0n1 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.947 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:18.206 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.206 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:18.206 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.206 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:18.206 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.206 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.206 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:18.206 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:18.206 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:18.206 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:18.206 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.206 22:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: ]] 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.464 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.465 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:34:18.465 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.465 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.465 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:18.465 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.465 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.724 nvme0n1 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: ]] 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:18.724 
22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.724 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.983 nvme0n1 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.983 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:19.242 22:43:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: ]] 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.242 22:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.501 nvme0n1 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.501 22:43:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:19.501 
22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: ]] 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.501 22:43:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.501 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.760 nvme0n1 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.760 22:43:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.760 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:19.761 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.761 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.761 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.761 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.761 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.761 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.761 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.761 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.761 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.761 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.761 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.761 
22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.761 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.761 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.761 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:19.761 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.761 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.020 nvme0n1 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.020 22:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: ]] 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.398 22:43:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.398 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.966 nvme0n1 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: ]] 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:21.966 22:43:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.966 22:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.225 nvme0n1 00:34:22.225 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.225 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.225 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: ]] 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.226 22:43:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.226 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.794 nvme0n1 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.794 22:43:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: ]] 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.794 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.795 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.795 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.795 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.795 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.795 22:43:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.795 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.795 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.795 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.795 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.795 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.795 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.795 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.795 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:22.795 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.795 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.054 nvme0n1 00:34:23.054 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.054 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.054 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.054 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.054 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.312 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.312 22:43:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.313 22:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.313 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.313 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.313 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.313 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.313 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.313 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.313 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.313 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.313 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.313 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.313 22:43:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.313 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.313 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:23.313 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.313 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.571 nvme0n1 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:23.571 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.572 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:23.572 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:23.572 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: ]] 00:34:23.572 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:23.572 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:23.572 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.572 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.572 22:43:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:23.572 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:23.572 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.572 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:23.572 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.572 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.830 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.830 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.830 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.830 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.830 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.830 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.830 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.830 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.830 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.830 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.830 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.830 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.830 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:23.830 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.830 22:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.400 nvme0n1 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.400 22:43:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: ]] 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.400 22:43:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:24.400 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.400 22:43:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.966 nvme0n1 00:34:24.966 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.966 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.966 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.966 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.966 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.966 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.966 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.966 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.966 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: ]] 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.967 22:43:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.967 22:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.535 nvme0n1 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: ]] 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:25.535 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.794 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.362 nvme0n1 00:34:26.362 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.362 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.362 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.362 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.362 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.362 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.362 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.362 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.362 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.362 22:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.362 
22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.362 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.930 nvme0n1 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: ]] 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.930 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.189 nvme0n1 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.189 
22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.189 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: ]] 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.190 22:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.190 nvme0n1 
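The `get_main_ns_ip` steps traced above (nvmf/common.sh@769-783) map the transport name to the environment variable that holds the correct address, then dereference it. Below is a hedged reconstruction assembled from the xtrace; the variable names come straight from the trace, but the surrounding values (`TEST_TRANSPORT`, `NVMF_INITIATOR_IP`) are stand-ins for what the harness exports, not the real common.sh:

```shell
#!/usr/bin/env bash
# Stand-in environment, mirroring what the trace shows for a tcp run.
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1

# Reconstruction of get_main_ns_ip from the xtrace: pick the variable
# name by transport, then use indirect expansion ${!ip} to read it.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1          # no transport selected
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1                   # named variable is unset
    echo "${!ip}"
}
```

For a tcp run this resolves to `NVMF_INITIATOR_IP` and echoes `10.0.0.1`, which is exactly the `echo 10.0.0.1` seen at nvmf/common.sh@783 before each `bdev_nvme_attach_controller` call.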
00:34:27.190 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.190 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.190 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.190 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.190 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:27.449 22:43:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: ]] 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.449 
22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.449 nvme0n1 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.449 22:43:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.449 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: ]] 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.709 nvme0n1 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.709 22:43:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.709 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:27.710 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.710 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.969 nvme0n1 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: ]] 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
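The trace above repeats one cycle per digest/dhgroup/keyid combination (host/auth.sh@100-104): restrict the initiator to a single digest and DH group via `bdev_nvme_set_options`, attach with `--dhchap-key keyN` (plus `--dhchap-ctrlr-key ckeyN` when a controller key exists, per the `${ckeys[keyid]:+...}` expansion at auth.sh@58), then detach. A minimal standalone sketch of that loop follows; `rpc_cmd` is stubbed to record its arguments so the control flow runs without an SPDK target, and the key strings are placeholders rather than the real DHHC-1 secrets:

```shell
#!/usr/bin/env bash
# Hedged sketch of the per-key DH-HMAC-CHAP loop seen in the xtrace.
# rpc_cmd is a recording stub; in the real suite it drives SPDK's rpc.py.
rpc_log=""
rpc_cmd() { rpc_log+="rpc_cmd $*"$'\n'; }

digests=(sha256 sha384)
dhgroups=(ffdhe2048 ffdhe8192)
# Placeholder key material; keyid 4 has no controller key, as in the trace.
keys=([0]="DHHC-1:00:placeholder0:" [4]="DHHC-1:03:placeholder4:")
ckeys=([0]="DHHC-1:03:ctrl-placeholder0:")

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # Limit the initiator to one digest/dhgroup pair, connect with the
      # named key, optionally pass the bidirectional controller key, detach.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done
done
printf '%s' "$rpc_log"
```

With 2 digests, 2 DH groups, and 2 key IDs this issues 8 attach calls, and only keyid 0 carries `--dhchap-ctrlr-key`; the full suite walks the same loop over all configured digests, groups, and keys, which is why the log repeats so uniformly.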
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.969 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.228 nvme0n1 00:34:28.228 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.228 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.228 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.229 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.229 22:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.229 
22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: ]] 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.229 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.488 nvme0n1 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 
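The entries above repeat a fixed per-key cycle: `bdev_nvme_set_options` restricts the host to a single DH-HMAC-CHAP digest/dhgroup pair, `bdev_nvme_attach_controller` connects with `--dhchap-key` (and `--dhchap-ctrlr-key` when a controller key is defined for that keyid), then the controller is verified via `bdev_nvme_get_controllers` and detached. A minimal sketch of one such iteration is below; `rpc` is a stand-in that echoes the calls instead of invoking SPDK's `scripts/rpc.py`, since the real commands assume a running target:

```shell
#!/usr/bin/env bash
# Hedged sketch of one connect_authenticate iteration from the log.
# "rpc" is a placeholder for SPDK's scripts/rpc.py; we echo the calls
# so the control flow is visible without a live nvmf target.
rpc() { echo "rpc.py $*"; }

digest=sha384
dhgroup=ffdhe3072
keyid=2

# Host side: allow exactly one digest and one DH group for DH-HMAC-CHAP.
rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Connect with the host key; the controller key is added only when one
# exists for this keyid (the log shows keyid 4 connecting without it).
ckey_opt=(--dhchap-ctrlr-key "ckey${keyid}")
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey_opt[@]}"

# Verify the controller came up, then tear it down for the next iteration.
rpc bdev_nvme_get_controllers
rpc bdev_nvme_detach_controller nvme0
```

The keyid values, IP, and NQNs mirror those in the log; the `rpc` wrapper is purely illustrative.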
00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: ]] 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.488 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.489 22:43:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.489 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.489 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.489 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.489 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.489 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.489 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.489 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.489 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.489 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:28.489 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.489 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.748 nvme0n1 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.748 22:43:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: ]] 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.748 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.007 nvme0n1 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.007 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.266 nvme0n1 00:34:29.266 22:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.266 22:43:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: ]] 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.266 22:43:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:29.266 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.266 22:43:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.525 nvme0n1 00:34:29.525 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.525 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.525 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.525 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.525 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.525 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.525 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.525 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.525 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.525 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: ]] 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.526 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.784 
22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.784 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.784 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.784 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.784 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.784 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.784 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.784 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.784 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.784 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.784 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.784 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.785 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:29.785 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.785 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.044 nvme0n1 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.044 22:43:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.044 22:43:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: ]] 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.044 22:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.303 nvme0n1 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: ]] 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.303 22:43:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.303 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.562 nvme0n1 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.562 22:43:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:30.562 22:43:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:30.562 
22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.562 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.820 nvme0n1 00:34:30.820 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.820 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.820 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.820 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.820 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.820 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.820 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.820 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.820 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.820 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.079 22:43:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: ]] 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.079 22:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.338 nvme0n1 
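The repeated `get_main_ns_ip` block in the trace does one simple thing: it maps the transport type to the name of the environment variable holding the right IP (`NVMF_FIRST_TARGET_IP` for RDMA, `NVMF_INITIATOR_IP` for TCP) and echoes its value, 10.0.0.1 here. A simplified Python sketch of the same selection, with the variable values passed as a plain dict for illustration (the real shell helper has additional fallbacks this omits):

```python
def get_main_ns_ip(transport: str, env: dict) -> str:
    """Pick the connection IP for a transport, mirroring the
    nvmf/common.sh get_main_ns_ip trace: rdma uses the first
    target IP, tcp uses the initiator IP."""
    candidates = {
        "rdma": "NVMF_FIRST_TARGET_IP",
        "tcp": "NVMF_INITIATOR_IP",
    }
    var = candidates.get(transport)
    if var is None:
        raise ValueError(f"unsupported transport: {transport}")
    ip = env.get(var)
    if not ip:
        raise ValueError(f"{var} is not set")
    return ip

print(get_main_ns_ip("tcp", {"NVMF_INITIATOR_IP": "10.0.0.1"}))  # -> 10.0.0.1
```

The resolved address is what each iteration then feeds to `bdev_nvme_attach_controller -a 10.0.0.1 -s 4420`.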
00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:31.338 22:43:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: ]] 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.338 
22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.338 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.339 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:31.339 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.339 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.906 nvme0n1 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.906 22:43:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: ]] 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.906 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.907 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.907 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:31.907 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.907 22:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.166 nvme0n1 00:34:32.166 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.166 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.166 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.166 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.166 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.166 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: ]] 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.425 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.684 nvme0n1 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.684 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:33.252 nvme0n1 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:33.252 22:43:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: ]] 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.252 22:43:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.252 22:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.820 nvme0n1 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:33.820 22:43:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: ]] 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.820 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.821 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.821 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.821 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.821 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.821 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:33.821 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.821 22:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.388 nvme0n1 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.388 
22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: ]] 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.388 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.646 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.646 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.646 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:34.646 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:34.646 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.646 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.646 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.646 22:43:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:34.646 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.646 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:34.646 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:34.646 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:34.646 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:34.646 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.647 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.214 nvme0n1 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.214 22:43:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: ]] 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:35.214 22:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:35.782 nvme0n1
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=:
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=:
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:35.782 22:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.350 nvme0n1
00:34:36.350 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.350 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:36.350 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:36.350 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.350 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.350 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.350 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:36.350 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:36.350 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.350 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.350 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.350 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:34:36.350 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:36.350 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:36.350 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:34:36.350 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:36.350 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:36.350 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7:
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=:
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7:
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: ]]
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=:
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.351 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.610 nvme0n1
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==:
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==:
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==:
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: ]]
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==:
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:36.610 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.611 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.611 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.611 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:36.611 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:36.611 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:36.611 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:36.611 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:36.611 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:36.611 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:36.611 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:36.611 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:36.611 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:36.611 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:36.611 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:36.611 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.611 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.870 nvme0n1
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V:
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp:
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V:
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: ]]
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp:
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.870 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.129 nvme0n1
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==:
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh:
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==:
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: ]]
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh:
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:37.129 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:37.130 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:37.130 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:37.130 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:37.130 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:37.130 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:37.130 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:37.130 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:37.130 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:37.130 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:37.130 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:37.130 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.130 22:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.389 nvme0n1
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=:
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=:
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.389 nvme0n1
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.389 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7:
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=:
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7:
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: ]]
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=:
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.649 nvme0n1
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.649 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:37.908 22:43:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: ]] 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.908 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.909 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.909 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.909 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.909 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:37.909 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.909 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.909 nvme0n1 00:34:37.909 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.909 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.909 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.909 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.909 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.909 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: ]] 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:38.168 
22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.168 22:43:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.168 22:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.168 nvme0n1 00:34:38.168 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.168 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.168 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.168 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.168 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.168 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.427 22:43:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: ]] 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.427 nvme0n1 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.427 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.428 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.428 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.428 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.428 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.428 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.428 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.687 22:43:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.687 nvme0n1 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.687 
22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: ]] 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.687 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:38.688 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.688 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.688 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.946 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.946 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.946 22:43:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.946 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.946 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.946 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.946 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.946 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.946 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.947 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.947 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.947 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:38.947 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.947 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.947 nvme0n1 00:34:38.947 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.947 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.947 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.947 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.947 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.207 22:43:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:39.207 22:43:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: ]] 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:39.207 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.208 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.208 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.208 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.208 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.208 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.208 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.208 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.208 22:43:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.208 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.208 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.208 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.208 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.208 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.208 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:39.208 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.208 22:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.466 nvme0n1 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.466 22:44:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: ]] 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:39.466 22:44:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.466 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.724 nvme0n1 00:34:39.724 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.724 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.724 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.724 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==: 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: ]] 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:39.725 22:44:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.725 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.983 nvme0n1 00:34:39.983 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.983 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.983 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.983 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.983 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.983 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.983 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.983 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.983 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.983 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.983 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.241 
22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.241 22:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.500 nvme0n1 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:40.500 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:40.501 22:44:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7: 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: ]] 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.501 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.759 nvme0n1 00:34:40.759 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.759 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.759 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.759 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.759 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.759 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.759 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:40.759 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.759 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.759 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: ]] 00:34:41.017 22:44:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.017 22:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.276 nvme0n1 00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V:
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp:
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V:
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: ]]
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp:
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:41.276 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:41.843 nvme0n1
22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==:
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh:
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==:
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: ]]
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh:
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:41.843 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.101 nvme0n1
00:34:42.101 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.101 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:42.101 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:42.101 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.101 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.101 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.360 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:42.360 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:42.360 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.360 22:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=:
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=:
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.360 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.619 nvme0n1
22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7:
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=:
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNzIzMjdmZjQxM2Q0Y2ZiMDY1YjY3ZjVlODFiYjKJojl7:
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=: ]]
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1OGI5MzEzNzkyYTMzMGU1MGY2MTU2MGQ0ZDFlZDc0NzYzYTM0NDRlYjZlYjhjZDg5Y2JiODUyMjkxNTZmMVO8jYw=:
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.619 22:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.186 nvme0n1
22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.186 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:43.186 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:43.186 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.186 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.186 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==:
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==:
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==:
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: ]]
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==:
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.444 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.012 nvme0n1
22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V:
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp:
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V:
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: ]]
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp:
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.012 22:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.579 nvme0n1
22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==:
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh:
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODlmOWYyZDhiN2VlNTEwZjBlMjhkOWYxMjNmNTgzYmY0YWZiZGQ2NTU2ODIyYzdjN/IZWw==:
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh: ]]
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZkODFjYzQxNTlkODU4Mjk5ZmQzMzA0MGI5ZTlkNTEOuidh:
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.579 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:45.146 nvme0n1 00:34:45.146 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.146 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.146 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.146 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.146 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.146 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.146 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.146 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.146 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.146 22:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA4NjU2OTNiNmUwMzExYTFkMDBkYmU1NTg1NTI5MmIzYmI0NzM4Mzc0ZGZhZDBmZTYwYjU1YmIzYzNiODA3MW6ZCwY=: 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.146 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.404 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.404 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.404 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.404 
22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.404 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.404 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.404 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.404 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:45.404 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.404 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:45.404 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:45.404 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:45.404 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:45.404 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.404 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.971 nvme0n1 00:34:45.971 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.971 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.971 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.971 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.971 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.971 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.971 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.971 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.971 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.971 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.971 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.971 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:45.971 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: ]] 00:34:45.972 
22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.972 request: 00:34:45.972 { 00:34:45.972 "name": "nvme0", 00:34:45.972 "trtype": "tcp", 00:34:45.972 "traddr": "10.0.0.1", 00:34:45.972 "adrfam": "ipv4", 00:34:45.972 "trsvcid": "4420", 00:34:45.972 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:45.972 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:45.972 "prchk_reftag": false, 00:34:45.972 "prchk_guard": false, 00:34:45.972 "hdgst": false, 00:34:45.972 "ddgst": false, 00:34:45.972 "allow_unrecognized_csi": false, 00:34:45.972 "method": "bdev_nvme_attach_controller", 00:34:45.972 "req_id": 1 00:34:45.972 } 00:34:45.972 Got JSON-RPC error response 00:34:45.972 response: 00:34:45.972 { 00:34:45.972 "code": -5, 00:34:45.972 "message": "Input/output 
error" 00:34:45.972 } 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.972 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.231 request: 00:34:46.231 { 00:34:46.231 "name": "nvme0", 00:34:46.231 "trtype": "tcp", 00:34:46.231 "traddr": "10.0.0.1", 
00:34:46.231 "adrfam": "ipv4", 00:34:46.231 "trsvcid": "4420", 00:34:46.231 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:46.231 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:46.231 "prchk_reftag": false, 00:34:46.231 "prchk_guard": false, 00:34:46.231 "hdgst": false, 00:34:46.231 "ddgst": false, 00:34:46.231 "dhchap_key": "key2", 00:34:46.231 "allow_unrecognized_csi": false, 00:34:46.231 "method": "bdev_nvme_attach_controller", 00:34:46.231 "req_id": 1 00:34:46.231 } 00:34:46.231 Got JSON-RPC error response 00:34:46.231 response: 00:34:46.231 { 00:34:46.231 "code": -5, 00:34:46.231 "message": "Input/output error" 00:34:46.231 } 00:34:46.231 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:46.231 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.232 22:44:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:46.232 22:44:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.232 request: 00:34:46.232 { 00:34:46.232 "name": "nvme0", 00:34:46.232 "trtype": "tcp", 00:34:46.232 "traddr": "10.0.0.1", 00:34:46.232 "adrfam": "ipv4", 00:34:46.232 "trsvcid": "4420", 00:34:46.232 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:46.232 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:46.232 "prchk_reftag": false, 00:34:46.232 "prchk_guard": false, 00:34:46.232 "hdgst": false, 00:34:46.232 "ddgst": false, 00:34:46.232 "dhchap_key": "key1", 00:34:46.232 "dhchap_ctrlr_key": "ckey2", 00:34:46.232 "allow_unrecognized_csi": false, 00:34:46.232 "method": "bdev_nvme_attach_controller", 00:34:46.232 "req_id": 1 00:34:46.232 } 00:34:46.232 Got JSON-RPC error response 00:34:46.232 response: 00:34:46.232 { 00:34:46.232 "code": -5, 00:34:46.232 "message": "Input/output error" 00:34:46.232 } 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.232 22:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.491 nvme0n1 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.491 22:44:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: ]] 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.491 22:44:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.491 request: 00:34:46.491 { 00:34:46.491 "name": "nvme0", 00:34:46.491 "dhchap_key": "key1", 00:34:46.491 "dhchap_ctrlr_key": "ckey2", 00:34:46.491 "method": "bdev_nvme_set_keys", 00:34:46.491 "req_id": 1 00:34:46.491 } 00:34:46.491 Got JSON-RPC error response 00:34:46.491 response: 00:34:46.491 { 00:34:46.491 "code": -13, 00:34:46.491 "message": "Permission denied" 00:34:46.491 } 00:34:46.491 
22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:46.491 22:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:47.866 22:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.866 22:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:47.866 22:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.866 22:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.866 22:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.866 22:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:47.866 22:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWMxMTZhMzI2NjFhNzMwNTE4MTE4ZGNlZGMwOWI4NjZkYjI1MmYzMTRkYmMwMDI2sBL3EQ==: 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: ]] 00:34:48.803 22:44:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE1MmQ5NzM4ZDhjYjEyOWY5ODQ3MzlkZDRkNGEwMGUwNjYzMmQ5ZDY0NzgzMzIxBKFtFA==: 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.803 nvme0n1 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.803 22:44:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjNzY1Mjg2OTE4ZmQxMmIxNTliMTAzYTBkZTdhMWXCDE1V: 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: ]] 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM4Yjk2YjhmMTU3YzE2YmIxYmNhMjEyMDAwNTljOTJ7Rppp: 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:48.803 
22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.803 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.803 request: 00:34:48.803 { 00:34:48.803 "name": "nvme0", 00:34:48.803 "dhchap_key": "key2", 00:34:48.803 "dhchap_ctrlr_key": "ckey1", 00:34:48.803 "method": "bdev_nvme_set_keys", 00:34:48.803 "req_id": 1 00:34:48.803 } 00:34:48.803 Got JSON-RPC error response 00:34:48.803 response: 00:34:48.803 { 00:34:48.803 "code": -13, 00:34:48.803 "message": "Permission denied" 00:34:48.803 } 00:34:48.804 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:48.804 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:48.804 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:48.804 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:48.804 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:49.062 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.062 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:49.062 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.062 22:44:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.062 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.062 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:49.062 22:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:49.998 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.998 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:49.998 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.998 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.998 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.998 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:49.998 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:49.998 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:49.998 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:49.999 rmmod nvme_tcp 00:34:49.999 rmmod nvme_fabrics 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 513750 ']' 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 513750 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 513750 ']' 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 513750 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 513750 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 513750' 00:34:49.999 killing process with pid 513750 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 513750 00:34:49.999 22:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 513750 00:34:50.258 22:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:50.258 22:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:50.258 22:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:50.258 22:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:34:50.258 22:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:34:50.258 22:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:50.258 22:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:34:50.258 22:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:50.258 22:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:50.258 22:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:50.258 22:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:50.258 22:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:52.791 22:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:52.791 22:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:52.791 22:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:52.791 22:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:52.791 22:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:52.791 22:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:34:52.791 22:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:52.791 22:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:52.791 22:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:52.791 22:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:52.791 22:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:52.791 22:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:52.791 22:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:55.325 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:55.325 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:55.325 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:55.325 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:55.325 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:55.325 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:55.325 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:55.325 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:55.325 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:55.325 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:55.325 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:55.325 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:55.325 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:55.325 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:55.325 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:55.325 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:56.262 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:56.262 22:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.x1v /tmp/spdk.key-null.AkH /tmp/spdk.key-sha256.2oR /tmp/spdk.key-sha384.hB5 /tmp/spdk.key-sha512.qRx 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:56.262 22:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:58.796 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:58.796 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:58.796 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:58.796 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:58.796 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:59.055 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:59.055 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:59.055 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:59.055 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:59.055 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:59.055 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:59.055 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:59.055 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:59.055 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:59.055 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:59.055 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:59.055 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:59.055 00:34:59.055 real 0m55.645s 00:34:59.055 user 0m50.558s 00:34:59.055 sys 0m12.451s 00:34:59.055 22:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:59.055 22:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.055 ************************************ 00:34:59.055 END TEST nvmf_auth_host 00:34:59.055 ************************************ 00:34:59.055 22:44:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:34:59.055 22:44:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:59.055 22:44:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:59.055 22:44:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:59.055 22:44:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.055 ************************************ 00:34:59.055 START TEST nvmf_digest 00:34:59.055 ************************************ 00:34:59.056 22:44:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:59.315 * Looking for test storage... 00:34:59.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:59.315 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:59.315 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:34:59.315 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:59.315 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:59.315 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:59.315 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:59.315 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:59.315 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:59.315 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:59.315 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:59.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.316 --rc genhtml_branch_coverage=1 00:34:59.316 --rc genhtml_function_coverage=1 00:34:59.316 --rc genhtml_legend=1 00:34:59.316 --rc geninfo_all_blocks=1 00:34:59.316 --rc geninfo_unexecuted_blocks=1 00:34:59.316 00:34:59.316 ' 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:59.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.316 --rc genhtml_branch_coverage=1 00:34:59.316 --rc genhtml_function_coverage=1 00:34:59.316 --rc genhtml_legend=1 00:34:59.316 --rc geninfo_all_blocks=1 00:34:59.316 --rc geninfo_unexecuted_blocks=1 00:34:59.316 00:34:59.316 ' 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:59.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.316 --rc genhtml_branch_coverage=1 00:34:59.316 --rc genhtml_function_coverage=1 00:34:59.316 --rc genhtml_legend=1 00:34:59.316 --rc geninfo_all_blocks=1 00:34:59.316 --rc geninfo_unexecuted_blocks=1 00:34:59.316 00:34:59.316 ' 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:59.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.316 --rc genhtml_branch_coverage=1 00:34:59.316 --rc genhtml_function_coverage=1 00:34:59.316 --rc genhtml_legend=1 00:34:59.316 --rc geninfo_all_blocks=1 00:34:59.316 --rc geninfo_unexecuted_blocks=1 00:34:59.316 00:34:59.316 ' 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:59.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:59.316 22:44:20 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:59.316 22:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:05.886 22:44:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:05.886 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:05.886 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:05.886 Found net devices under 0000:af:00.0: cvl_0_0 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:05.886 Found net devices under 0000:af:00.1: cvl_0_1 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:05.886 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:05.887 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:05.887 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:05.887 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:05.887 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:05.887 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:05.887 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:05.887 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:05.887 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:05.887 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:05.887 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:05.887 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:35:05.887 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:05.887 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:05.887 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:05.887 22:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:05.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:05.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:35:05.887 00:35:05.887 --- 10.0.0.2 ping statistics --- 00:35:05.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.887 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:05.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:05.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:35:05.887 00:35:05.887 --- 10.0.0.1 ping statistics --- 00:35:05.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.887 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:05.887 ************************************ 00:35:05.887 START TEST nvmf_digest_clean 00:35:05.887 ************************************ 00:35:05.887 
22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=527591 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 527591 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 527591 ']' 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:05.887 22:44:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:05.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:05.887 [2024-12-14 22:44:26.208143] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:05.887 [2024-12-14 22:44:26.208187] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:05.887 [2024-12-14 22:44:26.289035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.887 [2024-12-14 22:44:26.310069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:05.887 [2024-12-14 22:44:26.310106] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:05.887 [2024-12-14 22:44:26.310113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:05.887 [2024-12-14 22:44:26.310119] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:05.887 [2024-12-14 22:44:26.310127] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:05.887 [2024-12-14 22:44:26.310640] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:05.887 null0 00:35:05.887 [2024-12-14 22:44:26.472713] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:05.887 [2024-12-14 22:44:26.496908] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=527703 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 527703 /var/tmp/bperf.sock 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 527703 ']' 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:05.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:05.887 [2024-12-14 22:44:26.548113] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:05.887 [2024-12-14 22:44:26.548152] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527703 ] 00:35:05.887 [2024-12-14 22:44:26.622625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.887 [2024-12-14 22:44:26.644928] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.887 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:05.888 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:05.888 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:05.888 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:05.888 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:06.146 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:06.146 22:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:06.405 nvme0n1 00:35:06.405 22:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:06.405 22:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:06.405 Running I/O for 2 seconds... 00:35:08.717 24738.00 IOPS, 96.63 MiB/s [2024-12-14T21:44:29.601Z] 25123.50 IOPS, 98.14 MiB/s 00:35:08.717 Latency(us) 00:35:08.717 [2024-12-14T21:44:29.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.717 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:08.717 nvme0n1 : 2.00 25145.27 98.22 0.00 0.00 5085.80 2512.21 11234.74 00:35:08.717 [2024-12-14T21:44:29.601Z] =================================================================================================================== 00:35:08.717 [2024-12-14T21:44:29.601Z] Total : 25145.27 98.22 0.00 0.00 5085.80 2512.21 11234.74 00:35:08.717 { 00:35:08.717 "results": [ 00:35:08.717 { 00:35:08.717 "job": "nvme0n1", 00:35:08.717 "core_mask": "0x2", 00:35:08.717 "workload": "randread", 00:35:08.717 "status": "finished", 00:35:08.717 "queue_depth": 128, 00:35:08.717 "io_size": 4096, 00:35:08.717 "runtime": 2.004194, 00:35:08.717 "iops": 25145.270368038226, 00:35:08.717 "mibps": 98.22371237514932, 00:35:08.717 "io_failed": 0, 00:35:08.717 "io_timeout": 0, 00:35:08.717 "avg_latency_us": 5085.800020107416, 00:35:08.717 "min_latency_us": 2512.213333333333, 00:35:08.717 "max_latency_us": 11234.742857142857 00:35:08.717 } 00:35:08.717 ], 00:35:08.717 "core_count": 1 00:35:08.717 } 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:08.717 | select(.opcode=="crc32c") 00:35:08.717 | "\(.module_name) \(.executed)"' 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 527703 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 527703 ']' 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 527703 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527703 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527703' 00:35:08.717 killing process with pid 527703 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 527703 00:35:08.717 Received shutdown signal, test time was about 2.000000 seconds 00:35:08.717 00:35:08.717 Latency(us) 00:35:08.717 [2024-12-14T21:44:29.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.717 [2024-12-14T21:44:29.601Z] =================================================================================================================== 00:35:08.717 [2024-12-14T21:44:29.601Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:08.717 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 527703 00:35:08.977 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:08.977 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:08.977 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:08.977 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:08.977 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:08.977 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:08.977 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:08.977 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=528168 00:35:08.977 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 528168 /var/tmp/bperf.sock 00:35:08.977 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:08.977 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 528168 ']' 00:35:08.977 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:08.977 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:08.977 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:08.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:08.977 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:08.977 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:08.977 [2024-12-14 22:44:29.765553] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:08.977 [2024-12-14 22:44:29.765606] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528168 ] 00:35:08.977 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:08.977 Zero copy mechanism will not be used. 
00:35:08.977 [2024-12-14 22:44:29.841031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.977 [2024-12-14 22:44:29.860142] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:09.236 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:09.236 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:09.236 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:09.236 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:09.236 22:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:09.494 22:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:09.495 22:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:09.753 nvme0n1 00:35:09.753 22:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:09.753 22:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:09.753 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:09.753 Zero copy mechanism will not be used. 00:35:09.753 Running I/O for 2 seconds... 
00:35:12.064 5992.00 IOPS, 749.00 MiB/s [2024-12-14T21:44:32.948Z] 6055.50 IOPS, 756.94 MiB/s 00:35:12.064 Latency(us) 00:35:12.064 [2024-12-14T21:44:32.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:12.064 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:12.064 nvme0n1 : 2.00 6057.18 757.15 0.00 0.00 2638.88 643.66 6023.07 00:35:12.064 [2024-12-14T21:44:32.948Z] =================================================================================================================== 00:35:12.064 [2024-12-14T21:44:32.948Z] Total : 6057.18 757.15 0.00 0.00 2638.88 643.66 6023.07 00:35:12.064 { 00:35:12.064 "results": [ 00:35:12.064 { 00:35:12.064 "job": "nvme0n1", 00:35:12.064 "core_mask": "0x2", 00:35:12.064 "workload": "randread", 00:35:12.064 "status": "finished", 00:35:12.065 "queue_depth": 16, 00:35:12.065 "io_size": 131072, 00:35:12.065 "runtime": 2.002086, 00:35:12.065 "iops": 6057.1823587997715, 00:35:12.065 "mibps": 757.1477948499714, 00:35:12.065 "io_failed": 0, 00:35:12.065 "io_timeout": 0, 00:35:12.065 "avg_latency_us": 2638.879286911928, 00:35:12.065 "min_latency_us": 643.6571428571428, 00:35:12.065 "max_latency_us": 6023.070476190476 00:35:12.065 } 00:35:12.065 ], 00:35:12.065 "core_count": 1 00:35:12.065 } 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:12.065 | select(.opcode=="crc32c") 00:35:12.065 | "\(.module_name) \(.executed)"' 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 528168 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 528168 ']' 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 528168 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 528168 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 528168' 00:35:12.065 killing process with pid 528168 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 528168 00:35:12.065 Received shutdown signal, test time was about 2.000000 seconds 00:35:12.065 
00:35:12.065 Latency(us) 00:35:12.065 [2024-12-14T21:44:32.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:12.065 [2024-12-14T21:44:32.949Z] =================================================================================================================== 00:35:12.065 [2024-12-14T21:44:32.949Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:12.065 22:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 528168 00:35:12.324 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:12.324 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:12.324 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:12.324 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:12.324 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:12.324 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:12.324 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:12.324 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=528631 00:35:12.324 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 528631 /var/tmp/bperf.sock 00:35:12.324 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:12.324 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 528631 ']' 00:35:12.324 22:44:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:12.324 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:12.324 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:12.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:12.324 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:12.324 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:12.324 [2024-12-14 22:44:33.071375] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:12.324 [2024-12-14 22:44:33.071423] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528631 ] 00:35:12.324 [2024-12-14 22:44:33.146712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.324 [2024-12-14 22:44:33.169213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.582 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:12.582 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:12.582 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:12.582 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:12.582 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:12.841 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.841 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:13.100 nvme0n1 00:35:13.100 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:13.100 22:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:13.100 Running I/O for 2 seconds... 
00:35:15.410 27478.00 IOPS, 107.34 MiB/s [2024-12-14T21:44:36.294Z] 27599.00 IOPS, 107.81 MiB/s 00:35:15.410 Latency(us) 00:35:15.410 [2024-12-14T21:44:36.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:15.410 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:15.410 nvme0n1 : 2.01 27600.76 107.82 0.00 0.00 4629.03 3464.05 11172.33 00:35:15.410 [2024-12-14T21:44:36.294Z] =================================================================================================================== 00:35:15.410 [2024-12-14T21:44:36.294Z] Total : 27600.76 107.82 0.00 0.00 4629.03 3464.05 11172.33 00:35:15.410 { 00:35:15.410 "results": [ 00:35:15.410 { 00:35:15.410 "job": "nvme0n1", 00:35:15.410 "core_mask": "0x2", 00:35:15.410 "workload": "randwrite", 00:35:15.410 "status": "finished", 00:35:15.410 "queue_depth": 128, 00:35:15.410 "io_size": 4096, 00:35:15.410 "runtime": 2.005959, 00:35:15.410 "iops": 27600.763525077033, 00:35:15.410 "mibps": 107.81548251983216, 00:35:15.410 "io_failed": 0, 00:35:15.410 "io_timeout": 0, 00:35:15.410 "avg_latency_us": 4629.0273846593145, 00:35:15.410 "min_latency_us": 3464.0457142857144, 00:35:15.410 "max_latency_us": 11172.327619047619 00:35:15.410 } 00:35:15.410 ], 00:35:15.410 "core_count": 1 00:35:15.410 } 00:35:15.411 22:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:15.411 22:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:15.411 22:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:15.411 22:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:15.411 | select(.opcode=="crc32c") 00:35:15.411 | "\(.module_name) \(.executed)"' 00:35:15.411 22:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:15.411 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:15.411 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:15.411 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:15.411 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:15.411 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 528631 00:35:15.411 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 528631 ']' 00:35:15.411 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 528631 00:35:15.411 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:15.411 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:15.411 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 528631 00:35:15.411 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:15.411 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:15.411 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 528631' 00:35:15.411 killing process with pid 528631 00:35:15.411 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 528631 00:35:15.411 Received shutdown signal, test time was about 2.000000 seconds 00:35:15.411 
00:35:15.411 Latency(us) 00:35:15.411 [2024-12-14T21:44:36.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:15.411 [2024-12-14T21:44:36.295Z] =================================================================================================================== 00:35:15.411 [2024-12-14T21:44:36.295Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:15.411 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 528631 00:35:15.669 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:15.669 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:15.669 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:15.669 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:15.669 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:15.669 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:15.669 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:15.669 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=529296 00:35:15.669 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 529296 /var/tmp/bperf.sock 00:35:15.670 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:15.670 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 529296 ']' 00:35:15.670 22:44:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:15.670 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:15.670 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:15.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:15.670 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:15.670 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:15.670 [2024-12-14 22:44:36.406346] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:15.670 [2024-12-14 22:44:36.406390] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529296 ] 00:35:15.670 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:15.670 Zero copy mechanism will not be used. 
00:35:15.670 [2024-12-14 22:44:36.480821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.670 [2024-12-14 22:44:36.503505] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.929 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.929 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:15.929 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:15.929 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:15.929 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:15.929 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.929 22:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:16.496 nvme0n1 00:35:16.496 22:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:16.496 22:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:16.496 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:16.496 Zero copy mechanism will not be used. 00:35:16.496 Running I/O for 2 seconds... 
00:35:18.810 5996.00 IOPS, 749.50 MiB/s [2024-12-14T21:44:39.694Z] 6601.00 IOPS, 825.12 MiB/s 00:35:18.810 Latency(us) 00:35:18.810 [2024-12-14T21:44:39.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.810 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:18.810 nvme0n1 : 2.00 6597.19 824.65 0.00 0.00 2420.84 1810.04 12545.46 00:35:18.810 [2024-12-14T21:44:39.694Z] =================================================================================================================== 00:35:18.810 [2024-12-14T21:44:39.694Z] Total : 6597.19 824.65 0.00 0.00 2420.84 1810.04 12545.46 00:35:18.810 { 00:35:18.810 "results": [ 00:35:18.810 { 00:35:18.810 "job": "nvme0n1", 00:35:18.810 "core_mask": "0x2", 00:35:18.810 "workload": "randwrite", 00:35:18.810 "status": "finished", 00:35:18.810 "queue_depth": 16, 00:35:18.810 "io_size": 131072, 00:35:18.810 "runtime": 2.004035, 00:35:18.810 "iops": 6597.190168834377, 00:35:18.810 "mibps": 824.6487711042971, 00:35:18.810 "io_failed": 0, 00:35:18.810 "io_timeout": 0, 00:35:18.810 "avg_latency_us": 2420.836452253089, 00:35:18.810 "min_latency_us": 1810.0419047619048, 00:35:18.810 "max_latency_us": 12545.462857142857 00:35:18.810 } 00:35:18.810 ], 00:35:18.810 "core_count": 1 00:35:18.810 } 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:18.811 | select(.opcode=="crc32c") 00:35:18.811 | "\(.module_name) \(.executed)"' 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 529296 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 529296 ']' 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 529296 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 529296 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 529296' 00:35:18.811 killing process with pid 529296 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 529296 00:35:18.811 Received shutdown signal, test time was about 2.000000 seconds 00:35:18.811 
00:35:18.811 Latency(us) 00:35:18.811 [2024-12-14T21:44:39.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.811 [2024-12-14T21:44:39.695Z] =================================================================================================================== 00:35:18.811 [2024-12-14T21:44:39.695Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:18.811 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 529296 00:35:19.070 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 527591 00:35:19.070 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 527591 ']' 00:35:19.070 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 527591 00:35:19.070 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:19.070 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:19.070 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527591 00:35:19.070 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:19.070 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:19.070 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527591' 00:35:19.070 killing process with pid 527591 00:35:19.070 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 527591 00:35:19.070 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 527591 00:35:19.329 00:35:19.329 real 0m13.809s 
00:35:19.329 user 0m26.381s 00:35:19.329 sys 0m4.584s 00:35:19.329 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:19.329 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:19.329 ************************************ 00:35:19.329 END TEST nvmf_digest_clean 00:35:19.329 ************************************ 00:35:19.329 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:19.329 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:19.329 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:19.329 22:44:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:19.329 ************************************ 00:35:19.329 START TEST nvmf_digest_error 00:35:19.329 ************************************ 00:35:19.329 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:35:19.329 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:19.329 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:19.329 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:19.329 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.329 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=529793 00:35:19.329 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 529793 00:35:19.329 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 529793 ']' 
00:35:19.329 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:19.329 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.329 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:19.329 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:19.329 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:19.329 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.329 [2024-12-14 22:44:40.089030] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:19.329 [2024-12-14 22:44:40.089080] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:19.329 [2024-12-14 22:44:40.152285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.329 [2024-12-14 22:44:40.174935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:19.329 [2024-12-14 22:44:40.174969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:19.329 [2024-12-14 22:44:40.174976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:19.329 [2024-12-14 22:44:40.174983] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:19.329 [2024-12-14 22:44:40.174988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:19.329 [2024-12-14 22:44:40.175493] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.589 [2024-12-14 22:44:40.316109] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.589 22:44:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.589 null0 00:35:19.589 [2024-12-14 22:44:40.407847] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.589 [2024-12-14 22:44:40.432042] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=529921 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 529921 /var/tmp/bperf.sock 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 529921 ']' 
00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:19.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:19.589 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.848 [2024-12-14 22:44:40.485656] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:19.848 [2024-12-14 22:44:40.485697] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529921 ] 00:35:19.848 [2024-12-14 22:44:40.559626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.848 [2024-12-14 22:44:40.582160] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.848 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.848 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:19.848 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:19.848 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:20.107 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:20.107 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.107 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.107 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.107 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:20.107 22:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:20.365 nvme0n1 00:35:20.365 22:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:20.365 22:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.365 22:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.365 22:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.365 22:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:20.365 22:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:20.624 Running I/O for 2 seconds... 00:35:20.624 [2024-12-14 22:44:41.302854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:20.624 [2024-12-14 22:44:41.302886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.624 [2024-12-14 22:44:41.302898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.624 [2024-12-14 22:44:41.315343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:20.624 [2024-12-14 22:44:41.315366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.624 [2024-12-14 22:44:41.315375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.624 [2024-12-14 22:44:41.326361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:20.624 [2024-12-14 22:44:41.326381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.624 [2024-12-14 22:44:41.326390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.624 [2024-12-14 22:44:41.335003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:20.624 [2024-12-14 22:44:41.335036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12006 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.624 [2024-12-14 22:44:41.335045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... dozens of further data digest error records, identical except for cid/lba, spanning 2024-12-14 22:44:41.345 through 22:44:42.071: each READ on qid:1 of tqpair=(0x2263990) hit nvme_tcp_accel_seq_recv_compute_crc32_done data digest error and completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:35:21.405 [2024-12-14 22:44:42.084408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990)
00:35:21.405 [2024-12-14 22:44:42.084428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.405 [2024-12-14 22:44:42.084436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:21.405 [2024-12-14 22:44:42.095572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.405 [2024-12-14 22:44:42.095591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.405 [2024-12-14 22:44:42.095599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.405 [2024-12-14 22:44:42.104613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.405 [2024-12-14 22:44:42.104633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.405 [2024-12-14 22:44:42.104641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.405 [2024-12-14 22:44:42.116258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.405 [2024-12-14 22:44:42.116279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.405 [2024-12-14 22:44:42.116287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.405 [2024-12-14 22:44:42.128409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.405 [2024-12-14 22:44:42.128428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.405 [2024-12-14 22:44:42.128436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.405 [2024-12-14 22:44:42.137488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.405 [2024-12-14 22:44:42.137507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.405 [2024-12-14 22:44:42.137519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.405 [2024-12-14 22:44:42.145737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.405 [2024-12-14 22:44:42.145756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.405 [2024-12-14 22:44:42.145764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.405 [2024-12-14 22:44:42.156639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.405 [2024-12-14 22:44:42.156658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.405 [2024-12-14 22:44:42.156665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.405 [2024-12-14 22:44:42.164971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.405 [2024-12-14 22:44:42.164991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.405 [2024-12-14 
22:44:42.164999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.405 [2024-12-14 22:44:42.175889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.405 [2024-12-14 22:44:42.175912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.405 [2024-12-14 22:44:42.175921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.405 [2024-12-14 22:44:42.188542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.405 [2024-12-14 22:44:42.188561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.405 [2024-12-14 22:44:42.188569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.405 [2024-12-14 22:44:42.200205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.405 [2024-12-14 22:44:42.200224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.405 [2024-12-14 22:44:42.200232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.405 [2024-12-14 22:44:42.209294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.405 [2024-12-14 22:44:42.209313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4226 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.405 [2024-12-14 22:44:42.209321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.405 [2024-12-14 22:44:42.218795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.405 [2024-12-14 22:44:42.218814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.405 [2024-12-14 22:44:42.218822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.406 [2024-12-14 22:44:42.228129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.406 [2024-12-14 22:44:42.228151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.406 [2024-12-14 22:44:42.228160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.406 [2024-12-14 22:44:42.240177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.406 [2024-12-14 22:44:42.240197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.406 [2024-12-14 22:44:42.240205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.406 [2024-12-14 22:44:42.253008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.406 [2024-12-14 22:44:42.253030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.406 [2024-12-14 22:44:42.253038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.406 [2024-12-14 22:44:42.260524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.406 [2024-12-14 22:44:42.260542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.406 [2024-12-14 22:44:42.260551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.406 [2024-12-14 22:44:42.272160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.406 [2024-12-14 22:44:42.272180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.406 [2024-12-14 22:44:42.272188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.406 [2024-12-14 22:44:42.279993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.406 [2024-12-14 22:44:42.280013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.406 [2024-12-14 22:44:42.280021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.665 24570.00 IOPS, 95.98 MiB/s [2024-12-14T21:44:42.549Z] [2024-12-14 22:44:42.292430] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.665 [2024-12-14 22:44:42.292451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.665 [2024-12-14 22:44:42.292460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.665 [2024-12-14 22:44:42.303880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.665 [2024-12-14 22:44:42.303900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.665 [2024-12-14 22:44:42.303913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.665 [2024-12-14 22:44:42.313040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.665 [2024-12-14 22:44:42.313059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.665 [2024-12-14 22:44:42.313071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.665 [2024-12-14 22:44:42.324940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.665 [2024-12-14 22:44:42.324959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.665 [2024-12-14 22:44:42.324967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:21.665 [2024-12-14 22:44:42.336785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.665 [2024-12-14 22:44:42.336805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.665 [2024-12-14 22:44:42.336812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.665 [2024-12-14 22:44:42.348578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.665 [2024-12-14 22:44:42.348596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.665 [2024-12-14 22:44:42.348605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.665 [2024-12-14 22:44:42.357563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.665 [2024-12-14 22:44:42.357583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.665 [2024-12-14 22:44:42.357591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.665 [2024-12-14 22:44:42.368478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.665 [2024-12-14 22:44:42.368497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.665 [2024-12-14 22:44:42.368505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.665 [2024-12-14 22:44:42.378717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.665 [2024-12-14 22:44:42.378737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.665 [2024-12-14 22:44:42.378745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.665 [2024-12-14 22:44:42.387492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.665 [2024-12-14 22:44:42.387510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.665 [2024-12-14 22:44:42.387518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.665 [2024-12-14 22:44:42.397290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.665 [2024-12-14 22:44:42.397309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.665 [2024-12-14 22:44:42.397317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.665 [2024-12-14 22:44:42.406596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.666 [2024-12-14 22:44:42.406618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.666 [2024-12-14 
22:44:42.406626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.666 [2024-12-14 22:44:42.415068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.666 [2024-12-14 22:44:42.415088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.666 [2024-12-14 22:44:42.415096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.666 [2024-12-14 22:44:42.424027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.666 [2024-12-14 22:44:42.424046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.666 [2024-12-14 22:44:42.424054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.666 [2024-12-14 22:44:42.436067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.666 [2024-12-14 22:44:42.436087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.666 [2024-12-14 22:44:42.436095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.666 [2024-12-14 22:44:42.447756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.666 [2024-12-14 22:44:42.447774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20412 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.666 [2024-12-14 22:44:42.447783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.666 [2024-12-14 22:44:42.456918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.666 [2024-12-14 22:44:42.456938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.666 [2024-12-14 22:44:42.456946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.666 [2024-12-14 22:44:42.467786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.666 [2024-12-14 22:44:42.467805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.666 [2024-12-14 22:44:42.467813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.666 [2024-12-14 22:44:42.479357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.666 [2024-12-14 22:44:42.479376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.666 [2024-12-14 22:44:42.479385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.666 [2024-12-14 22:44:42.490002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.666 [2024-12-14 22:44:42.490021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.666 [2024-12-14 22:44:42.490029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.666 [2024-12-14 22:44:42.498803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.666 [2024-12-14 22:44:42.498823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.666 [2024-12-14 22:44:42.498831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.666 [2024-12-14 22:44:42.511196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.666 [2024-12-14 22:44:42.511215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.666 [2024-12-14 22:44:42.511224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.666 [2024-12-14 22:44:42.522420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.666 [2024-12-14 22:44:42.522441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.666 [2024-12-14 22:44:42.522449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.666 [2024-12-14 22:44:42.530777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2263990) 00:35:21.666 [2024-12-14 22:44:42.530797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.666 [2024-12-14 22:44:42.530805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.666 [2024-12-14 22:44:42.540884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.666 [2024-12-14 22:44:42.540910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.666 [2024-12-14 22:44:42.540919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.926 [2024-12-14 22:44:42.553395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.926 [2024-12-14 22:44:42.553416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.926 [2024-12-14 22:44:42.553424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.926 [2024-12-14 22:44:42.561586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.926 [2024-12-14 22:44:42.561605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.926 [2024-12-14 22:44:42.561614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.926 [2024-12-14 22:44:42.573507] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.926 [2024-12-14 22:44:42.573527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.926 [2024-12-14 22:44:42.573535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.926 [2024-12-14 22:44:42.586074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.926 [2024-12-14 22:44:42.586095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.926 [2024-12-14 22:44:42.586107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.926 [2024-12-14 22:44:42.594067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.926 [2024-12-14 22:44:42.594086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.926 [2024-12-14 22:44:42.594094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.926 [2024-12-14 22:44:42.605706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990) 00:35:21.926 [2024-12-14 22:44:42.605727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.926 [2024-12-14 22:44:42.605735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0
00:35:21.926 [2024-12-14 22:44:42.615374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2263990)
00:35:21.926 [2024-12-14 22:44:42.615395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.926 [2024-12-14 22:44:42.615403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.447 24953.00 IOPS, 97.47 MiB/s [2024-12-14T21:44:43.331Z]
00:35:22.447
00:35:22.447 Latency(us)
00:35:22.447 [2024-12-14T21:44:43.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:22.447 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
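The repeated `data digest error` entries above come from NVMe/TCP data-digest verification: the data digest carried in a data PDU is a CRC-32C (Castagnoli) checksum, and a mismatch on receive is reported as a transient transport error so the I/O can be retried. In this `nvmf_digest_error` run the mismatches are expected, since the test exercises exactly that error path. A minimal bit-at-a-time CRC-32C sketch (illustrative only, not SPDK's accelerated implementation):

```python
def crc32c(data: bytes) -> int:
    """Bit-at-a-time CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.

    Illustrative only; real NVMe/TCP stacks use table-driven or
    hardware-accelerated variants (e.g. the SSE4.2 crc32 instruction).
    """
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right, XOR in the reflected polynomial when the low bit is set
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789"
assert crc32c(b"123456789") == 0xE3069283
```

The receiver recomputes this over the data PDU payload and compares it to the digest field; each `*ERROR*` line above is one such comparison failing, which the host then surfaces as `COMMAND TRANSIENT TRANSPORT ERROR (00/22)`.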
00:35:22.447 nvme0n1 : 2.00 24960.71 97.50 0.00 0.00 5122.33 2371.78 18474.91
00:35:22.447 [2024-12-14T21:44:43.331Z] ===================================================================================================================
00:35:22.447 [2024-12-14T21:44:43.331Z] Total : 24960.71 97.50 0.00 0.00 5122.33 2371.78 18474.91
00:35:22.447 {
00:35:22.447   "results": [
00:35:22.447     {
00:35:22.447       "job": "nvme0n1",
00:35:22.447       "core_mask": "0x2",
00:35:22.447       "workload": "randread",
00:35:22.447       "status": "finished",
00:35:22.447       "queue_depth": 128,
00:35:22.447       "io_size": 4096,
00:35:22.447       "runtime": 2.003509,
00:35:22.447       "iops": 24960.706440550053,
00:35:22.447       "mibps": 97.50275953339865,
00:35:22.447       "io_failed": 0,
00:35:22.447       "io_timeout": 0,
00:35:22.447       "avg_latency_us": 5122.328878478064,
00:35:22.447       "min_latency_us": 2371.7790476190476,
00:35:22.447       "max_latency_us": 18474.910476190475
00:35:22.447     }
00:35:22.447   ],
00:35:22.447   "core_count": 1
00:35:22.447 }
00:35:22.447 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:22.447 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:22.447 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:22.447 | .driver_specific
00:35:22.447 | .nvme_error
00:35:22.447 | .status_code
00:35:22.447 | .command_transient_transport_error'
00:35:22.447 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:22.706 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 196 > 0 ))
00:35:22.706 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 529921
00:35:22.706 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 529921 ']'
00:35:22.706 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 529921
00:35:22.706 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:35:22.706 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:22.706 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 529921
00:35:22.706 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:22.706 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:22.706 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 529921'
00:35:22.706 killing process with pid 529921
00:35:22.706 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 529921
00:35:22.706 Received shutdown signal, test time was about 2.000000 seconds
00:35:22.706
00:35:22.706 Latency(us)
00:35:22.706 [2024-12-14T21:44:43.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:22.706 [2024-12-14T21:44:43.590Z] ===================================================================================================================
00:35:22.706 [2024-12-14T21:44:43.590Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:22.706 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 529921
00:35:22.965 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:35:22.965 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:22.965 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:35:22.965 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:35:22.965 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:35:22.965 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=530476
00:35:22.965 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 530476 /var/tmp/bperf.sock
00:35:22.965 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:35:22.965 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 530476 ']'
00:35:22.965 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:22.965 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:22.965 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:22.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:22.965 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:22.965 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:22.965 [2024-12-14 22:44:43.761073] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:35:22.965 [2024-12-14 22:44:43.761126] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530476 ] 00:35:22.965 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:22.965 Zero copy mechanism will not be used. 00:35:22.965 [2024-12-14 22:44:43.834966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:23.224 [2024-12-14 22:44:43.854316] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:23.224 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:23.224 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:23.224 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:23.224 22:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:23.482 22:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:23.482 22:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.482 22:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:23.482 22:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.482 22:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:23.482 22:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:23.741 nvme0n1 00:35:23.741 22:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:23.741 22:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.741 22:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:23.741 22:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.741 22:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:23.741 22:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:24.001 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:24.001 Zero copy mechanism will not be used. 00:35:24.001 Running I/O for 2 seconds... 
00:35:24.001 [2024-12-14 22:44:44.717543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.001 [2024-12-14 22:44:44.717577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.001 [2024-12-14 22:44:44.717587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.001 [2024-12-14 22:44:44.722844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.001 [2024-12-14 22:44:44.722868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.001 [2024-12-14 22:44:44.722878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.001 [2024-12-14 22:44:44.728205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.001 [2024-12-14 22:44:44.728226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.001 [2024-12-14 22:44:44.728234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.001 [2024-12-14 22:44:44.733371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.001 [2024-12-14 22:44:44.733391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.001 [2024-12-14 22:44:44.733399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.001 [2024-12-14 22:44:44.738644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.001 [2024-12-14 22:44:44.738665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.001 [2024-12-14 22:44:44.738673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.001 [2024-12-14 22:44:44.743929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.001 [2024-12-14 22:44:44.743950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.001 [2024-12-14 22:44:44.743958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.001 [2024-12-14 22:44:44.749164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.001 [2024-12-14 22:44:44.749185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.001 [2024-12-14 22:44:44.749193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.001 [2024-12-14 22:44:44.754361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.001 [2024-12-14 22:44:44.754382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.001 [2024-12-14 22:44:44.754390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.001 [2024-12-14 22:44:44.759534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.001 [2024-12-14 22:44:44.759554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.001 [2024-12-14 22:44:44.759563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.001 [2024-12-14 22:44:44.764801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.001 [2024-12-14 22:44:44.764823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.001 [2024-12-14 22:44:44.764830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.001 [2024-12-14 22:44:44.770031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.001 [2024-12-14 22:44:44.770053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.001 [2024-12-14 22:44:44.770061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.001 [2024-12-14 22:44:44.775306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.001 [2024-12-14 22:44:44.775327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:24.001 [2024-12-14 22:44:44.775335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.001 [2024-12-14 22:44:44.780667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.780689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.780697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.785954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.785975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.785986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.791195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.791216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.791224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.796443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.796464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.796472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.801750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.801775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.801783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.807105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.807126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.807134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.812489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.812511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.812520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.818206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.818227] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.818236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.823609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.823630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.823639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.828855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.828876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.828884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.834076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.834097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.834105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.839466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.839486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.839494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.844762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.844783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.844791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.850224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.850246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.850254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.855627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.855648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.855655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.861030] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.861051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.861060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.866381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.866402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.866410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.871626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.871647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.871655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.876954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.876974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.876986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0044 
p:0 m:0 dnr:0 00:35:24.002 [2024-12-14 22:44:44.882334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.002 [2024-12-14 22:44:44.882355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.002 [2024-12-14 22:44:44.882362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.262 [2024-12-14 22:44:44.887688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.263 [2024-12-14 22:44:44.887709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.263 [2024-12-14 22:44:44.887718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.263 [2024-12-14 22:44:44.893015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.263 [2024-12-14 22:44:44.893035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.263 [2024-12-14 22:44:44.893043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.263 [2024-12-14 22:44:44.898291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.263 [2024-12-14 22:44:44.898312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.263 [2024-12-14 22:44:44.898320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.263 [2024-12-14 22:44:44.903661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.263 [2024-12-14 22:44:44.903682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.263 [2024-12-14 22:44:44.903690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.263 [2024-12-14 22:44:44.908953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.263 [2024-12-14 22:44:44.908973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.263 [2024-12-14 22:44:44.908981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.263 [2024-12-14 22:44:44.914288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.263 [2024-12-14 22:44:44.914315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.263 [2024-12-14 22:44:44.914322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.263 [2024-12-14 22:44:44.919780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.263 [2024-12-14 22:44:44.919802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.263 [2024-12-14 22:44:44.919809] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.263 [2024-12-14 22:44:44.925240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.263 [2024-12-14 22:44:44.925267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.263 [2024-12-14 22:44:44.925275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.263 [2024-12-14 22:44:44.930712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.263 [2024-12-14 22:44:44.930733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.263 [2024-12-14 22:44:44.930740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.263 [2024-12-14 22:44:44.936012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.263 [2024-12-14 22:44:44.936032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.263 [2024-12-14 22:44:44.936040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.263 [2024-12-14 22:44:44.941291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.263 [2024-12-14 22:44:44.941311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:24.263 [2024-12-14 22:44:44.941319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:24.263 [2024-12-14 22:44:44.946569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:24.263 [2024-12-14 22:44:44.946590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.263 [2024-12-14 22:44:44.946598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
[... 22:44:44.951900 through 22:44:45.363781: the same three-record pattern repeats for every in-flight READ on qid:1 (cid:2-15, len:32, varying LBAs) — nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest error on tqpair=(0x252ac50), nvme_qpair.c prints the READ command, and the completion is COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:35:24.597 [2024-12-14 22:44:45.368869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:24.597 [2024-12-14 22:44:45.368889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.597 [2024-12-14 22:44:45.368897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0
sqhd:0064 p:0 m:0 dnr:0 00:35:24.597 [2024-12-14 22:44:45.373846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.597 [2024-12-14 22:44:45.373866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.597 [2024-12-14 22:44:45.373873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.597 [2024-12-14 22:44:45.376777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.597 [2024-12-14 22:44:45.376796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.597 [2024-12-14 22:44:45.376807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.597 [2024-12-14 22:44:45.381602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.597 [2024-12-14 22:44:45.381622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.597 [2024-12-14 22:44:45.381630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.597 [2024-12-14 22:44:45.386743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.597 [2024-12-14 22:44:45.386764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.597 [2024-12-14 22:44:45.386772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.597 [2024-12-14 22:44:45.391883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.597 [2024-12-14 22:44:45.391909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.597 [2024-12-14 22:44:45.391917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.597 [2024-12-14 22:44:45.397069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.597 [2024-12-14 22:44:45.397089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.597 [2024-12-14 22:44:45.397096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.597 [2024-12-14 22:44:45.402184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.597 [2024-12-14 22:44:45.402204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.597 [2024-12-14 22:44:45.402212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.597 [2024-12-14 22:44:45.407320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.597 [2024-12-14 22:44:45.407340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.597 [2024-12-14 
22:44:45.407348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.597 [2024-12-14 22:44:45.412575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.597 [2024-12-14 22:44:45.412594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.597 [2024-12-14 22:44:45.412601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.597 [2024-12-14 22:44:45.417793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.597 [2024-12-14 22:44:45.417813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.597 [2024-12-14 22:44:45.417821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.598 [2024-12-14 22:44:45.423006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.598 [2024-12-14 22:44:45.423029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.598 [2024-12-14 22:44:45.423037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.598 [2024-12-14 22:44:45.428394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.598 [2024-12-14 22:44:45.428414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14112 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.598 [2024-12-14 22:44:45.428422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.598 [2024-12-14 22:44:45.433707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.598 [2024-12-14 22:44:45.433728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.598 [2024-12-14 22:44:45.433736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.598 [2024-12-14 22:44:45.438884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.598 [2024-12-14 22:44:45.438911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.598 [2024-12-14 22:44:45.438920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.598 [2024-12-14 22:44:45.444063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.598 [2024-12-14 22:44:45.444084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.598 [2024-12-14 22:44:45.444092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.598 [2024-12-14 22:44:45.449125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.598 [2024-12-14 22:44:45.449145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.598 [2024-12-14 22:44:45.449153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.598 [2024-12-14 22:44:45.454248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.598 [2024-12-14 22:44:45.454269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.598 [2024-12-14 22:44:45.454277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.598 [2024-12-14 22:44:45.459423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.598 [2024-12-14 22:44:45.459444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.598 [2024-12-14 22:44:45.459451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.598 [2024-12-14 22:44:45.464594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.598 [2024-12-14 22:44:45.464614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.598 [2024-12-14 22:44:45.464625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.598 [2024-12-14 22:44:45.469814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x252ac50) 00:35:24.598 [2024-12-14 22:44:45.469835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.598 [2024-12-14 22:44:45.469842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.598 [2024-12-14 22:44:45.476567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.598 [2024-12-14 22:44:45.476588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.598 [2024-12-14 22:44:45.476597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.869 [2024-12-14 22:44:45.482148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.869 [2024-12-14 22:44:45.482170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.869 [2024-12-14 22:44:45.482178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.869 [2024-12-14 22:44:45.488657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.869 [2024-12-14 22:44:45.488679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.488688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.495989] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.496011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.496019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.503426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.503448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.503457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.511258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.511280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.511289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.518743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.518765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.518773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0044 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.526122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.526148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.526157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.533823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.533845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.533854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.541113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.541135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.541144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.546721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.546741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.546749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.551915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.551936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.551943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.557073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.557094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.557101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.562246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.562271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.562278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.567355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.567376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 
22:44:45.567384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.572441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.572461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.572469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.577510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.577531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.577539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.582559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.582580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.582587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.587710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.587731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22144 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.587739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.592866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.592887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.592894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.598045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.598065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.598073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.603221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.603241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.603249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.608386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.608406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.608414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.613567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.613588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.613595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.618645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.618666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.618680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.623768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.623788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.623796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.628901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.628930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.628937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.870 [2024-12-14 22:44:45.634121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.870 [2024-12-14 22:44:45.634142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.870 [2024-12-14 22:44:45.634149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.639286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.639307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.639314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.644482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.644503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.644510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.649606] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.649626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.649633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.654691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.654712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.654720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.659770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.659791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.659798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.664832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.664856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.664864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 
p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.669898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.669923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.669930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.675772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.675791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.675799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.680941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.680961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.680969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.686023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.686043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.686051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.691081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.691103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.691111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.696165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.696185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.696193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.701280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.701300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.701309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.706374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.706395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.706403] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.712756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.712776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.712783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.871 5756.00 IOPS, 719.50 MiB/s [2024-12-14T21:44:45.755Z] [2024-12-14 22:44:45.717937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.717957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.717965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.723031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.723051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.723059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.728126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.728147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.728155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.733393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.733414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.733423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.738484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.738505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.738512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.743616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.743636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.743644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:24.871 [2024-12-14 22:44:45.748797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:24.871 [2024-12-14 22:44:45.748818] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.871 [2024-12-14 22:44:45.748827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.141 [2024-12-14 22:44:45.753948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.141 [2024-12-14 22:44:45.753974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.141 [2024-12-14 22:44:45.753982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.141 [2024-12-14 22:44:45.759151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.141 [2024-12-14 22:44:45.759172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.141 [2024-12-14 22:44:45.759180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.141 [2024-12-14 22:44:45.764342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.141 [2024-12-14 22:44:45.764362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.141 [2024-12-14 22:44:45.764370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.141 [2024-12-14 22:44:45.769461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x252ac50) 00:35:25.141 [2024-12-14 22:44:45.769482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.141 [2024-12-14 22:44:45.769489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.141 [2024-12-14 22:44:45.774616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.141 [2024-12-14 22:44:45.774637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.141 [2024-12-14 22:44:45.774646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.141 [2024-12-14 22:44:45.779884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.141 [2024-12-14 22:44:45.779911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.141 [2024-12-14 22:44:45.779919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.141 [2024-12-14 22:44:45.784992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.141 [2024-12-14 22:44:45.785012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.141 [2024-12-14 22:44:45.785020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.141 [2024-12-14 22:44:45.790074] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.141 [2024-12-14 22:44:45.790094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.141 [2024-12-14 22:44:45.790102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.141 [2024-12-14 22:44:45.795184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.141 [2024-12-14 22:44:45.795204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.141 [2024-12-14 22:44:45.795212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.141 [2024-12-14 22:44:45.800279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.141 [2024-12-14 22:44:45.800299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.141 [2024-12-14 22:44:45.800307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.141 [2024-12-14 22:44:45.805359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.141 [2024-12-14 22:44:45.805379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.141 [2024-12-14 22:44:45.805387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0004 p:0 m:0 dnr:0 00:35:25.141 [2024-12-14 22:44:45.810396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.141 [2024-12-14 22:44:45.810416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.810424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.815525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.815546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.815554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.820631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.820652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.820659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.825753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.825772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.825780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.830828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.830849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.830857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.835934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.835954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.835962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.840999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.841027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.841038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.846572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.846594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 
22:44:45.846602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.852597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.852619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.852627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.859889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.859915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.859924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.867250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.867272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.867280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.874460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.874481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12512 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.874489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.882083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.882104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.882112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.889125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.889146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.889154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.896722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.896744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.896753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.904085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.904110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.904119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.911889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.911918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.911927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.919289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.919311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.919320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.926772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.926795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.926803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.934258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.934280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.934288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.142 [2024-12-14 22:44:45.942038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.142 [2024-12-14 22:44:45.942060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.142 [2024-12-14 22:44:45.942068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.143 [2024-12-14 22:44:45.949235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.143 [2024-12-14 22:44:45.949259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.143 [2024-12-14 22:44:45.949267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.143 [2024-12-14 22:44:45.956833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.143 [2024-12-14 22:44:45.956857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.143 [2024-12-14 22:44:45.956866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.143 [2024-12-14 22:44:45.963868] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.143 [2024-12-14 22:44:45.963891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.143 [2024-12-14 22:44:45.963899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.143 [2024-12-14 22:44:45.971125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.143 [2024-12-14 22:44:45.971147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.143 [2024-12-14 22:44:45.971156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.143 [2024-12-14 22:44:45.978544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.143 [2024-12-14 22:44:45.978566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.143 [2024-12-14 22:44:45.978574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.143 [2024-12-14 22:44:45.986046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.143 [2024-12-14 22:44:45.986068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.143 [2024-12-14 22:44:45.986077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 
p:0 m:0 dnr:0 00:35:25.143 [2024-12-14 22:44:45.994270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.143 [2024-12-14 22:44:45.994292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.143 [2024-12-14 22:44:45.994300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.143 [2024-12-14 22:44:46.001758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.143 [2024-12-14 22:44:46.001780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.143 [2024-12-14 22:44:46.001788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.143 [2024-12-14 22:44:46.010324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.143 [2024-12-14 22:44:46.010346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.143 [2024-12-14 22:44:46.010354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.143 [2024-12-14 22:44:46.018389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.143 [2024-12-14 22:44:46.018411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.143 [2024-12-14 22:44:46.018420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.422 [2024-12-14 22:44:46.026261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.422 [2024-12-14 22:44:46.026284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.422 [2024-12-14 22:44:46.026293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.422 [2024-12-14 22:44:46.034099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.422 [2024-12-14 22:44:46.034121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.422 [2024-12-14 22:44:46.034133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.422 [2024-12-14 22:44:46.041780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.422 [2024-12-14 22:44:46.041802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.422 [2024-12-14 22:44:46.041810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.422 [2024-12-14 22:44:46.049144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.422 [2024-12-14 22:44:46.049165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.422 [2024-12-14 22:44:46.049174] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.422 [2024-12-14 22:44:46.056295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.422 [2024-12-14 22:44:46.056317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.422 [2024-12-14 22:44:46.056325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.422 [2024-12-14 22:44:46.061927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.422 [2024-12-14 22:44:46.061948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.422 [2024-12-14 22:44:46.061957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.422 [2024-12-14 22:44:46.067166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.422 [2024-12-14 22:44:46.067187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.422 [2024-12-14 22:44:46.067195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.422 [2024-12-14 22:44:46.072346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.423 [2024-12-14 22:44:46.072368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.072377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.078687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.078708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.078716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.084146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.084168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.084176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.089285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.089306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.089314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.094443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.094464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.094473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.099677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.099697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.099705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.104830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.104851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.104859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.110024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.110045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.110053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.115149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.115169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.115178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.120337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.120358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.120366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.125532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.125553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.125561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.130459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.130480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.130493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.135683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.135705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.135713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.140919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.140940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.140947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.146112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.146133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.146141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.151275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.151298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.151306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.156504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.156525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.156533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.161659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.161680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.161687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.166840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.166861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.166869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.172006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.172027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.172035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.177176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.177201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.177209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.182303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.182323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.182332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.187419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.187439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.187447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.192492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.192513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.192520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.197496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.197517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.197525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.202563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.202584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.202592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.207695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.207715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.207723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.212760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.212781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.212788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.217844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.217865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.217873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.222940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.222961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.222968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.228076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.228097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.228105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.233198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.233219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.233227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.238400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.238423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.238431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.243685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.243705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.243713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.248882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.248908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.248916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.254038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.254059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.254066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.259323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.259343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.259351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.264798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.264819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.264831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.269956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.269976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.269984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.276029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.276051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.276059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.281396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.281417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.281425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.286453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.286474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.286482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.291552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.291573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.291580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.423 [2024-12-14 22:44:46.296651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.423 [2024-12-14 22:44:46.296671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.423 [2024-12-14 22:44:46.296679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.301952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.301974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.301983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.307190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.307211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.307219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.312325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.312347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.312355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.317562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.317582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.317590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.322756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.322777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.322785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.327986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.328006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.328014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.333081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.333102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.333110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.338259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.338280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.338288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.343391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.343411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.343419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.348557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.348577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.348584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.353687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.353707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.353721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.358779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.358799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.358807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.363917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.363938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.363945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.368938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.368958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.368965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.374080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.374100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.374107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.379216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.379236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.379244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.384320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.384340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.384348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.389614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.389635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.389643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.394722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.394742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.394750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.399872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.399896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.399910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.404994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.405014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.405022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.410107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.410127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.410135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.415205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.415226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.415233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.690 [2024-12-14 22:44:46.420313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.690 [2024-12-14 22:44:46.420333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.690 [2024-12-14 22:44:46.420341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.691 [2024-12-14 22:44:46.425408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.691 [2024-12-14 22:44:46.425428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.691 [2024-12-14 22:44:46.425436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.691 [2024-12-14 22:44:46.430538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.691 [2024-12-14 22:44:46.430558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.691 [2024-12-14 22:44:46.430566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.691 [2024-12-14 22:44:46.435682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.691 [2024-12-14 22:44:46.435702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.691 [2024-12-14 22:44:46.435709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.691 [2024-12-14 22:44:46.440795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.691 [2024-12-14 22:44:46.440815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.691 [2024-12-14 22:44:46.440823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.691 [2024-12-14 22:44:46.445935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.691 [2024-12-14 22:44:46.445956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.691 [2024-12-14 22:44:46.445964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.691 [2024-12-14 22:44:46.451070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.691 [2024-12-14 22:44:46.451091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.691 [2024-12-14 22:44:46.451099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:25.691 [2024-12-14 22:44:46.456204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.691 [2024-12-14 22:44:46.456224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.691 [2024-12-14 22:44:46.456232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:25.691 [2024-12-14 22:44:46.461341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.691 [2024-12-14 22:44:46.461362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.691 [2024-12-14 22:44:46.461369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:25.691 [2024-12-14 22:44:46.466492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.691 [2024-12-14 22:44:46.466513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.691 [2024-12-14 22:44:46.466520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:25.691 [2024-12-14 22:44:46.471585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50)
00:35:25.691 [2024-12-14 22:44:46.471605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.691 [2024-12-14 22:44:46.471613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0
sqhd:0004 p:0 m:0 dnr:0 00:35:25.691 [2024-12-14 22:44:46.476690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.691 [2024-12-14 22:44:46.476710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.691 [2024-12-14 22:44:46.476718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.691 [2024-12-14 22:44:46.481771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.691 [2024-12-14 22:44:46.481791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.691 [2024-12-14 22:44:46.481799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.691 [2024-12-14 22:44:46.486815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.691 [2024-12-14 22:44:46.486835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.691 [2024-12-14 22:44:46.486845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.691 [2024-12-14 22:44:46.491954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.691 [2024-12-14 22:44:46.491974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.691 [2024-12-14 22:44:46.491983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.691 [2024-12-14 22:44:46.497113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.691 [2024-12-14 22:44:46.497133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.691 [2024-12-14 22:44:46.497141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.691 [2024-12-14 22:44:46.502259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.691 [2024-12-14 22:44:46.502279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.691 [2024-12-14 22:44:46.502287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.691 [2024-12-14 22:44:46.507398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.691 [2024-12-14 22:44:46.507419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.691 [2024-12-14 22:44:46.507426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.691 [2024-12-14 22:44:46.512569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.691 [2024-12-14 22:44:46.512589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.691 [2024-12-14 
22:44:46.512597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.691 [2024-12-14 22:44:46.517796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.691 [2024-12-14 22:44:46.517817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.691 [2024-12-14 22:44:46.517825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.691 [2024-12-14 22:44:46.523006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.691 [2024-12-14 22:44:46.523027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.691 [2024-12-14 22:44:46.523035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.691 [2024-12-14 22:44:46.528233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.691 [2024-12-14 22:44:46.528253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.691 [2024-12-14 22:44:46.528261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.691 [2024-12-14 22:44:46.533290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.691 [2024-12-14 22:44:46.533314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.691 [2024-12-14 22:44:46.533322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.691 [2024-12-14 22:44:46.538439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.691 [2024-12-14 22:44:46.538459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.691 [2024-12-14 22:44:46.538467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.691 [2024-12-14 22:44:46.543591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.691 [2024-12-14 22:44:46.543611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.691 [2024-12-14 22:44:46.543619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.691 [2024-12-14 22:44:46.548858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.691 [2024-12-14 22:44:46.548879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.691 [2024-12-14 22:44:46.548887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.691 [2024-12-14 22:44:46.553918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.691 [2024-12-14 22:44:46.553939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.691 [2024-12-14 22:44:46.553947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.691 [2024-12-14 22:44:46.558956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.691 [2024-12-14 22:44:46.558976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.691 [2024-12-14 22:44:46.558983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.691 [2024-12-14 22:44:46.563949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.691 [2024-12-14 22:44:46.563969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.692 [2024-12-14 22:44:46.563977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.692 [2024-12-14 22:44:46.568997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.692 [2024-12-14 22:44:46.569017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.692 [2024-12-14 22:44:46.569025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.958 [2024-12-14 22:44:46.574168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 
00:35:25.958 [2024-12-14 22:44:46.574190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.958 [2024-12-14 22:44:46.574198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.958 [2024-12-14 22:44:46.579377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.958 [2024-12-14 22:44:46.579397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.579406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.584504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.584525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.584533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.589549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.589570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.589578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.594603] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.594622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.594630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.599690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.599710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.599717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.604734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.604756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.604763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.609836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.609857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.609865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0064 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.614941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.614962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.614970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.620268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.620292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.620299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.625580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.625601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.625609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.631009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.631030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.631038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.636426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.636447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.636455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.641694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.641715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.641723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.647015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.647035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.647043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.652248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.652267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 
22:44:46.652275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.657535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.657555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.657562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.663915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.663936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.663944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.671370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.671392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.671400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.678304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.678325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.678333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.684814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.684835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.684843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.691605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.691626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.691635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.697778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.697799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.697807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.705358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.705380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.705387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:25.959 [2024-12-14 22:44:46.712970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x252ac50) 00:35:25.959 [2024-12-14 22:44:46.712991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.959 [2024-12-14 22:44:46.712999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:25.959 5647.50 IOPS, 705.94 MiB/s 00:35:25.959 Latency(us) 00:35:25.959 [2024-12-14T21:44:46.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.959 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:25.959 nvme0n1 : 2.00 5644.56 705.57 0.00 0.00 2831.91 624.15 11484.40 00:35:25.959 [2024-12-14T21:44:46.843Z] =================================================================================================================== 00:35:25.959 [2024-12-14T21:44:46.843Z] Total : 5644.56 705.57 0.00 0.00 2831.91 624.15 11484.40 00:35:25.959 { 00:35:25.959 "results": [ 00:35:25.959 { 00:35:25.959 "job": "nvme0n1", 00:35:25.959 "core_mask": "0x2", 00:35:25.959 "workload": "randread", 00:35:25.959 "status": "finished", 00:35:25.959 "queue_depth": 16, 00:35:25.959 "io_size": 131072, 00:35:25.959 "runtime": 2.003877, 00:35:25.959 "iops": 5644.5580242699525, 00:35:25.959 "mibps": 705.5697530337441, 00:35:25.959 "io_failed": 0, 00:35:25.959 "io_timeout": 0, 00:35:25.959 "avg_latency_us": 2831.9074343980365, 00:35:25.960 "min_latency_us": 624.152380952381, 00:35:25.960 "max_latency_us": 11484.40380952381 
00:35:25.960 } 00:35:25.960 ], 00:35:25.960 "core_count": 1 00:35:25.960 } 00:35:25.960 22:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:25.960 22:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:25.960 22:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:25.960 | .driver_specific 00:35:25.960 | .nvme_error 00:35:25.960 | .status_code 00:35:25.960 | .command_transient_transport_error' 00:35:25.960 22:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:26.230 22:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 365 > 0 )) 00:35:26.230 22:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 530476 00:35:26.230 22:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 530476 ']' 00:35:26.230 22:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 530476 00:35:26.230 22:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:26.230 22:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:26.230 22:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 530476 00:35:26.230 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:26.230 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:26.230 22:44:47 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 530476' 00:35:26.230 killing process with pid 530476 00:35:26.230 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 530476 00:35:26.230 Received shutdown signal, test time was about 2.000000 seconds 00:35:26.230 00:35:26.230 Latency(us) 00:35:26.230 [2024-12-14T21:44:47.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:26.230 [2024-12-14T21:44:47.114Z] =================================================================================================================== 00:35:26.230 [2024-12-14T21:44:47.114Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:26.230 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 530476 00:35:26.502 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:35:26.502 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:26.502 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:26.502 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:26.502 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:26.502 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=530949 00:35:26.502 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 530949 /var/tmp/bperf.sock 00:35:26.502 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:26.502 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@835 -- # '[' -z 530949 ']' 00:35:26.502 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:26.502 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:26.502 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:26.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:26.502 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:26.502 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:26.502 [2024-12-14 22:44:47.206228] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:26.502 [2024-12-14 22:44:47.206280] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530949 ] 00:35:26.502 [2024-12-14 22:44:47.278992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:26.502 [2024-12-14 22:44:47.298347] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:26.774 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:26.774 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:26.774 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:26.774 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:26.774 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:26.774 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.774 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:26.774 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.774 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:26.774 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:27.048 nvme0n1 00:35:27.048 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:27.048 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.048 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:27.048 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.048 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:27.048 22:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:27.322 Running I/O for 2 seconds... 00:35:27.322 [2024-12-14 22:44:47.984865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eefae0 00:35:27.322 [2024-12-14 22:44:47.985974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:47.986004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:47.994018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee1b48 00:35:27.322 [2024-12-14 22:44:47.994646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:47.994669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.003050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee88f8 00:35:27.322 [2024-12-14 22:44:48.003910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.003930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.013151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee88f8 00:35:27.322 [2024-12-14 22:44:48.014641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6049 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.014659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.019679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef3a28 00:35:27.322 [2024-12-14 22:44:48.020390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.020408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.029032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef7da8 00:35:27.322 [2024-12-14 22:44:48.029839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.029856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.038087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef4f40 00:35:27.322 [2024-12-14 22:44:48.038906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.038924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.046551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee7c50 00:35:27.322 [2024-12-14 22:44:48.047252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:18464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.047269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.056963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eefae0 00:35:27.322 [2024-12-14 22:44:48.058047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.058066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.065127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efeb58 00:35:27.322 [2024-12-14 22:44:48.066036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.066057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.074325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eee190 00:35:27.322 [2024-12-14 22:44:48.075264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.075281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.083430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efb480 00:35:27.322 [2024-12-14 22:44:48.083928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.083945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.092869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee84c0 00:35:27.322 [2024-12-14 22:44:48.093494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.093513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.102146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef31b8 00:35:27.322 [2024-12-14 22:44:48.102881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.102899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.110511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee7c50 00:35:27.322 [2024-12-14 22:44:48.111177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.111196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.119613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eea680 00:35:27.322 
[2024-12-14 22:44:48.120695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.120714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.128908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeb328 00:35:27.322 [2024-12-14 22:44:48.130134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.130152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.137926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efb480 00:35:27.322 [2024-12-14 22:44:48.139104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.139122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.145914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee38d0 00:35:27.322 [2024-12-14 22:44:48.146682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.146700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.155005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x194e0e0) with pdu=0x200016ef6890 00:35:27.322 [2024-12-14 22:44:48.155633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.155651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.165176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016edfdc0 00:35:27.322 [2024-12-14 22:44:48.166611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.166628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.174451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef31b8 00:35:27.322 [2024-12-14 22:44:48.176015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.176033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.180979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeb760 00:35:27.322 [2024-12-14 22:44:48.181816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.181834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.190836] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eee190 00:35:27.322 [2024-12-14 22:44:48.191734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.191752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:27.322 [2024-12-14 22:44:48.200267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee5a90 00:35:27.322 [2024-12-14 22:44:48.201530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.322 [2024-12-14 22:44:48.201548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:27.596 [2024-12-14 22:44:48.208952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee0630 00:35:27.596 [2024-12-14 22:44:48.209950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.596 [2024-12-14 22:44:48.209968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:27.596 [2024-12-14 22:44:48.218178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee49b0 00:35:27.596 [2024-12-14 22:44:48.219195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.596 [2024-12-14 22:44:48.219214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 
dnr:0 00:35:27.596 [2024-12-14 22:44:48.227207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee6b70 00:35:27.596 [2024-12-14 22:44:48.227758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.596 [2024-12-14 22:44:48.227777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:27.596 [2024-12-14 22:44:48.235667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeaab8 00:35:27.596 [2024-12-14 22:44:48.236554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.596 [2024-12-14 22:44:48.236574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:27.596 [2024-12-14 22:44:48.245222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef92c0 00:35:27.596 [2024-12-14 22:44:48.246259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.596 [2024-12-14 22:44:48.246278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:27.596 [2024-12-14 22:44:48.253543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee3060 00:35:27.596 [2024-12-14 22:44:48.254126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.596 [2024-12-14 22:44:48.254145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:27.596 [2024-12-14 22:44:48.263075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef8a50 00:35:27.596 [2024-12-14 22:44:48.263965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.596 [2024-12-14 22:44:48.263984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:27.596 [2024-12-14 22:44:48.272346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee23b8 00:35:27.596 [2024-12-14 22:44:48.273224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.596 [2024-12-14 22:44:48.273242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:27.596 [2024-12-14 22:44:48.281307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef0ff8 00:35:27.596 [2024-12-14 22:44:48.281755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.596 [2024-12-14 22:44:48.281774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:27.596 [2024-12-14 22:44:48.290390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016edf118 00:35:27.596 [2024-12-14 22:44:48.291064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.596 [2024-12-14 22:44:48.291082] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:27.596 [2024-12-14 22:44:48.301383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efb048 00:35:27.596 [2024-12-14 22:44:48.302851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.596 [2024-12-14 22:44:48.302873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:27.596 [2024-12-14 22:44:48.307925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee0630 00:35:27.596 [2024-12-14 22:44:48.308682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.596 [2024-12-14 22:44:48.308699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:27.596 [2024-12-14 22:44:48.318493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeb328 00:35:27.596 [2024-12-14 22:44:48.319423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.596 [2024-12-14 22:44:48.319442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:27.596 [2024-12-14 22:44:48.327340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeb328 00:35:27.596 [2024-12-14 22:44:48.328278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.596 [2024-12-14 22:44:48.328296] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:27.596 [2024-12-14 22:44:48.336256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeb328 00:35:27.596 [2024-12-14 22:44:48.337190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.596 [2024-12-14 22:44:48.337209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:27.596 [2024-12-14 22:44:48.345126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeb328 00:35:27.596 [2024-12-14 22:44:48.346061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.596 [2024-12-14 22:44:48.346079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:27.596 [2024-12-14 22:44:48.354236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eea680 00:35:27.596 [2024-12-14 22:44:48.355365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.596 [2024-12-14 22:44:48.355384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:27.596 [2024-12-14 22:44:48.361483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efbcf0 00:35:27.596 [2024-12-14 22:44:48.362046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:27.596 [2024-12-14 22:44:48.362063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:27.596 [2024-12-14 22:44:48.372127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef2d80 00:35:27.596 [2024-12-14 22:44:48.373249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.597 [2024-12-14 22:44:48.373267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:27.597 [2024-12-14 22:44:48.380452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee4578 00:35:27.597 [2024-12-14 22:44:48.381486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.597 [2024-12-14 22:44:48.381504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:27.597 [2024-12-14 22:44:48.389585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eec408 00:35:27.597 [2024-12-14 22:44:48.390632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.597 [2024-12-14 22:44:48.390651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:27.597 [2024-12-14 22:44:48.397834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee5ec8 00:35:27.597 [2024-12-14 22:44:48.398713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:10876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.597 [2024-12-14 22:44:48.398731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:27.597 [2024-12-14 22:44:48.406828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ede038 00:35:27.597 [2024-12-14 22:44:48.407721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.597 [2024-12-14 22:44:48.407739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:27.597 [2024-12-14 22:44:48.415103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efb8b8 00:35:27.597 [2024-12-14 22:44:48.415887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.597 [2024-12-14 22:44:48.415907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:27.597 [2024-12-14 22:44:48.424234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee27f0 00:35:27.597 [2024-12-14 22:44:48.424931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.597 [2024-12-14 22:44:48.424949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:27.597 [2024-12-14 22:44:48.432492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef92c0 00:35:27.597 [2024-12-14 22:44:48.433259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.597 [2024-12-14 22:44:48.433276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:27.597 [2024-12-14 22:44:48.441493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeee38 00:35:27.597 [2024-12-14 22:44:48.442280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.597 [2024-12-14 22:44:48.442298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:27.597 [2024-12-14 22:44:48.449953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eebfd0 00:35:27.597 [2024-12-14 22:44:48.450633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.597 [2024-12-14 22:44:48.450650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:27.597 [2024-12-14 22:44:48.460355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee23b8 00:35:27.597 [2024-12-14 22:44:48.461372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.597 [2024-12-14 22:44:48.461390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:27.597 [2024-12-14 22:44:48.468757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef0ff8 00:35:27.597 
[2024-12-14 22:44:48.469592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.597 [2024-12-14 22:44:48.469610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:27.870 [2024-12-14 22:44:48.478381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef96f8 00:35:27.870 [2024-12-14 22:44:48.479553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.870 [2024-12-14 22:44:48.479570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:27.870 [2024-12-14 22:44:48.486999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efe2e8 00:35:27.870 [2024-12-14 22:44:48.487909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.870 [2024-12-14 22:44:48.487927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:27.870 [2024-12-14 22:44:48.496132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efb480 00:35:27.870 [2024-12-14 22:44:48.496939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.870 [2024-12-14 22:44:48.496957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:27.870 [2024-12-14 22:44:48.504628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x194e0e0) with pdu=0x200016ee3498 00:35:27.870 [2024-12-14 22:44:48.505434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.870 [2024-12-14 22:44:48.505452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:27.870 [2024-12-14 22:44:48.513805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eea680 00:35:27.870 [2024-12-14 22:44:48.514602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.870 [2024-12-14 22:44:48.514620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:27.870 [2024-12-14 22:44:48.522532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eec408 00:35:27.870 [2024-12-14 22:44:48.523322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.870 [2024-12-14 22:44:48.523339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:27.870 [2024-12-14 22:44:48.531803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef7970 00:35:27.870 [2024-12-14 22:44:48.532692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.870 [2024-12-14 22:44:48.532713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:27.870 [2024-12-14 22:44:48.542720] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eea680 00:35:27.870 [2024-12-14 22:44:48.544088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.870 [2024-12-14 22:44:48.544106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:27.870 [2024-12-14 22:44:48.550962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee0a68 00:35:27.870 [2024-12-14 22:44:48.551898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.870 [2024-12-14 22:44:48.551920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.870 [2024-12-14 22:44:48.559070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef7970 00:35:27.870 [2024-12-14 22:44:48.560067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.560084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.568327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeaab8 00:35:27.871 [2024-12-14 22:44:48.569446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.569464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 
dnr:0 00:35:27.871 [2024-12-14 22:44:48.577582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efdeb0 00:35:27.871 [2024-12-14 22:44:48.578858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.578876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.586698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efa7d8 00:35:27.871 [2024-12-14 22:44:48.587955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.587973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.595527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee4de8 00:35:27.871 [2024-12-14 22:44:48.596757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.596775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.603730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeff18 00:35:27.871 [2024-12-14 22:44:48.604540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.604558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.612918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef0ff8 00:35:27.871 [2024-12-14 22:44:48.614049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.614070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.622215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eef6a8 00:35:27.871 [2024-12-14 22:44:48.623467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.623485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.630688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee5ec8 00:35:27.871 [2024-12-14 22:44:48.631690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.631709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.639848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eea680 00:35:27.871 [2024-12-14 22:44:48.640894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.640915] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.649121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ede8a8 00:35:27.871 [2024-12-14 22:44:48.650278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.650296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.657495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeb328 00:35:27.871 [2024-12-14 22:44:48.658396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.658415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.666536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef6cc8 00:35:27.871 [2024-12-14 22:44:48.667458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.667475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.676370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeff18 00:35:27.871 [2024-12-14 22:44:48.677361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.677379] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.685231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efd640 00:35:27.871 [2024-12-14 22:44:48.686284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.686301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.695562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef5be8 00:35:27.871 [2024-12-14 22:44:48.697096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.697112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.701820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee5220 00:35:27.871 [2024-12-14 22:44:48.702589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.702607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.713034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef92c0 00:35:27.871 [2024-12-14 22:44:48.714457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2309 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:27.871 [2024-12-14 22:44:48.714474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.722381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eec408 00:35:27.871 [2024-12-14 22:44:48.723938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.723955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.728640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eee5c8 00:35:27.871 [2024-12-14 22:44:48.729405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.729423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.737884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016edfdc0 00:35:27.871 [2024-12-14 22:44:48.738411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.738429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:27.871 [2024-12-14 22:44:48.747423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef6cc8 00:35:27.871 [2024-12-14 22:44:48.748063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 
nsid:1 lba:12504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.871 [2024-12-14 22:44:48.748082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.756316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef9b30 00:35:28.139 [2024-12-14 22:44:48.757275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.757294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.765527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeaab8 00:35:28.139 [2024-12-14 22:44:48.766432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.766450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.776544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eee190 00:35:28.139 [2024-12-14 22:44:48.778340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.778357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.783345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef31b8 00:35:28.139 [2024-12-14 22:44:48.784185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.784203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.792632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef92c0 00:35:28.139 [2024-12-14 22:44:48.793513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.793531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.801745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef2948 00:35:28.139 [2024-12-14 22:44:48.802739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.802757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.810350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef1868 00:35:28.139 [2024-12-14 22:44:48.811198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.811216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.821335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efc128 
00:35:28.139 [2024-12-14 22:44:48.822737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.822756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.830609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee8088 00:35:28.139 [2024-12-14 22:44:48.832109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.832127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.836908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eebfd0 00:35:28.139 [2024-12-14 22:44:48.837530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.837548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.846975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeb760 00:35:28.139 [2024-12-14 22:44:48.848062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.848084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.856237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x194e0e0) with pdu=0x200016ee01f8 00:35:28.139 [2024-12-14 22:44:48.857451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.857468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.864612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef2d80 00:35:28.139 [2024-12-14 22:44:48.865467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.865485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.873508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efef90 00:35:28.139 [2024-12-14 22:44:48.874244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.874262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.882482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef7970 00:35:28.139 [2024-12-14 22:44:48.883454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.883471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.892501] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef7970 00:35:28.139 [2024-12-14 22:44:48.893962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.893980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.901309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee7818 00:35:28.139 [2024-12-14 22:44:48.902746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.902763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.910120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee6fa8 00:35:28.139 [2024-12-14 22:44:48.911616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.911634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.139 [2024-12-14 22:44:48.916375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef3a28 00:35:28.139 [2024-12-14 22:44:48.917038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.917055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:35:28.139 [2024-12-14 22:44:48.924763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee5ec8 00:35:28.139 [2024-12-14 22:44:48.925428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.139 [2024-12-14 22:44:48.925446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:28.140 [2024-12-14 22:44:48.933768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef6cc8 00:35:28.140 [2024-12-14 22:44:48.934448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.140 [2024-12-14 22:44:48.934466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.140 [2024-12-14 22:44:48.943853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee7818 00:35:28.140 [2024-12-14 22:44:48.944862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.140 [2024-12-14 22:44:48.944881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:28.140 [2024-12-14 22:44:48.953409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef9b30 00:35:28.140 [2024-12-14 22:44:48.954580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.140 [2024-12-14 22:44:48.954599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:28.140 [2024-12-14 22:44:48.962677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef1ca0 00:35:28.140 [2024-12-14 22:44:48.963977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.140 [2024-12-14 22:44:48.963995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:28.140 [2024-12-14 22:44:48.971179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef35f0 00:35:28.140 28272.00 IOPS, 110.44 MiB/s [2024-12-14T21:44:49.024Z] [2024-12-14 22:44:48.972348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.140 [2024-12-14 22:44:48.972365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:28.140 [2024-12-14 22:44:48.977776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeea00 00:35:28.140 [2024-12-14 22:44:48.978451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.140 [2024-12-14 22:44:48.978469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:28.140 [2024-12-14 22:44:48.987118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee6300 00:35:28.140 [2024-12-14 22:44:48.987896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.140 [2024-12-14 
22:44:48.987918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.140 [2024-12-14 22:44:48.998186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee9168 00:35:28.140 [2024-12-14 22:44:48.999397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.140 [2024-12-14 22:44:48.999415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:28.140 [2024-12-14 22:44:49.007114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee84c0 00:35:28.140 [2024-12-14 22:44:49.008354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.140 [2024-12-14 22:44:49.008372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.140 [2024-12-14 22:44:49.016653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef7970 00:35:28.140 [2024-12-14 22:44:49.018004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.140 [2024-12-14 22:44:49.018021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:28.411 [2024-12-14 22:44:49.023222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee73e0 00:35:28.411 [2024-12-14 22:44:49.023929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21824 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:28.411 [2024-12-14 22:44:49.023948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:28.411 [2024-12-14 22:44:49.034416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee1f80 00:35:28.411 [2024-12-14 22:44:49.035510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.411 [2024-12-14 22:44:49.035529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:28.411 [2024-12-14 22:44:49.043337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef2d80 00:35:28.411 [2024-12-14 22:44:49.044161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.411 [2024-12-14 22:44:49.044180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.411 [2024-12-14 22:44:49.053439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ede470 00:35:28.411 [2024-12-14 22:44:49.054845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.411 [2024-12-14 22:44:49.054863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:28.411 [2024-12-14 22:44:49.060726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee6fa8 00:35:28.411 [2024-12-14 22:44:49.061573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:63 nsid:1 lba:382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.411 [2024-12-14 22:44:49.061591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:35:28.411 [2024-12-14 22:44:49.069866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eec840
00:35:28.411 [2024-12-14 22:44:49.070375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.411 [2024-12-14 22:44:49.070394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:28.411 [2024-12-14 22:44:49.078894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef4298
00:35:28.411 [2024-12-14 22:44:49.079834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.411 [2024-12-14 22:44:49.079856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:35:28.411 [2024-12-14 22:44:49.088494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef0350
00:35:28.411 [2024-12-14 22:44:49.089317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.411 [2024-12-14 22:44:49.089335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:28.411 [2024-12-14 22:44:49.098309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eec840
00:35:28.411 [2024-12-14 22:44:49.099740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.411 [2024-12-14 22:44:49.099757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:35:28.411 [2024-12-14 22:44:49.104816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef6cc8
00:35:28.411 [2024-12-14 22:44:49.105395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.411 [2024-12-14 22:44:49.105413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:28.411 [2024-12-14 22:44:49.116318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef0350
00:35:28.411 [2024-12-14 22:44:49.117679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.411 [2024-12-14 22:44:49.117698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:28.411 [2024-12-14 22:44:49.122774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efe2e8
00:35:28.411 [2024-12-14 22:44:49.123451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.411 [2024-12-14 22:44:49.123468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:35:28.411 [2024-12-14 22:44:49.132650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efa3a0
00:35:28.411 [2024-12-14 22:44:49.133469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.411 [2024-12-14 22:44:49.133488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:35:28.411 [2024-12-14 22:44:49.141972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eee5c8
00:35:28.411 [2024-12-14 22:44:49.142973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.411 [2024-12-14 22:44:49.142991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:28.411 [2024-12-14 22:44:49.151495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efeb58
00:35:28.411 [2024-12-14 22:44:49.152438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.411 [2024-12-14 22:44:49.152459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:35:28.411 [2024-12-14 22:44:49.161272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eef6a8
00:35:28.411 [2024-12-14 22:44:49.162681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.411 [2024-12-14 22:44:49.162710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:35:28.411 [2024-12-14 22:44:49.167570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee4de8
00:35:28.411 [2024-12-14 22:44:49.168223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.411 [2024-12-14 22:44:49.168243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:28.411 [2024-12-14 22:44:49.176399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efd640
00:35:28.411 [2024-12-14 22:44:49.177092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.412 [2024-12-14 22:44:49.177111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:35:28.412 [2024-12-14 22:44:49.187250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef7100
00:35:28.412 [2024-12-14 22:44:49.188493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.412 [2024-12-14 22:44:49.188511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:35:28.412 [2024-12-14 22:44:49.195511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee7818
00:35:28.412 [2024-12-14 22:44:49.196436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.412 [2024-12-14 22:44:49.196454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:35:28.412 [2024-12-14 22:44:49.204218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efa7d8
00:35:28.412 [2024-12-14 22:44:49.205141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.412 [2024-12-14 22:44:49.205159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:35:28.412 [2024-12-14 22:44:49.213312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016edece0
00:35:28.412 [2024-12-14 22:44:49.214026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.412 [2024-12-14 22:44:49.214044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:28.412 [2024-12-14 22:44:49.223570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eed920
00:35:28.412 [2024-12-14 22:44:49.225062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.412 [2024-12-14 22:44:49.225080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:28.412 [2024-12-14 22:44:49.229844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efdeb0
00:35:28.412 [2024-12-14 22:44:49.230429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.412 [2024-12-14 22:44:49.230448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:35:28.412 [2024-12-14 22:44:49.238856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efc128
00:35:28.412 [2024-12-14 22:44:49.239569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.412 [2024-12-14 22:44:49.239588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:28.412 [2024-12-14 22:44:49.248266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee4140
00:35:28.412 [2024-12-14 22:44:49.249072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.412 [2024-12-14 22:44:49.249091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:28.412 [2024-12-14 22:44:49.257664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee5a90
00:35:28.412 [2024-12-14 22:44:49.258588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.412 [2024-12-14 22:44:49.258607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:35:28.412 [2024-12-14 22:44:49.267059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef4f40
00:35:28.412 [2024-12-14 22:44:49.268115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.412 [2024-12-14 22:44:49.268134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:28.412 [2024-12-14 22:44:49.277269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eddc00
00:35:28.412 [2024-12-14 22:44:49.278734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.412 [2024-12-14 22:44:49.278753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:35:28.412 [2024-12-14 22:44:49.283634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef57b0
00:35:28.412 [2024-12-14 22:44:49.284305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.412 [2024-12-14 22:44:49.284323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.292333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eed0b0
00:35:28.687 [2024-12-14 22:44:49.293018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.687 [2024-12-14 22:44:49.293036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.303571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee6300
00:35:28.687 [2024-12-14 22:44:49.304741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.687 [2024-12-14 22:44:49.304759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.312600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eebb98
00:35:28.687 [2024-12-14 22:44:49.313325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.687 [2024-12-14 22:44:49.313348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.320987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee5a90
00:35:28.687 [2024-12-14 22:44:49.322332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.687 [2024-12-14 22:44:49.322350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.330859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee3060
00:35:28.687 [2024-12-14 22:44:49.331974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.687 [2024-12-14 22:44:49.331993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.339290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee5658
00:35:28.687 [2024-12-14 22:44:49.340346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.687 [2024-12-14 22:44:49.340364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.347584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efda78
00:35:28.687 [2024-12-14 22:44:49.348546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.687 [2024-12-14 22:44:49.348564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.356241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee6b70
00:35:28.687 [2024-12-14 22:44:49.357174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.687 [2024-12-14 22:44:49.357193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.366206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eef6a8
00:35:28.687 [2024-12-14 22:44:49.367260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.687 [2024-12-14 22:44:49.367279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.375081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee6fa8
00:35:28.687 [2024-12-14 22:44:49.376131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.687 [2024-12-14 22:44:49.376150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.383917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ede470
00:35:28.687 [2024-12-14 22:44:49.384966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.687 [2024-12-14 22:44:49.384985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.392781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef8a50
00:35:28.687 [2024-12-14 22:44:49.394067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.687 [2024-12-14 22:44:49.394084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.401895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efb480
00:35:28.687 [2024-12-14 22:44:49.402946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.687 [2024-12-14 22:44:49.402964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.411049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee4578
00:35:28.687 [2024-12-14 22:44:49.412211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.687 [2024-12-14 22:44:49.412230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.418563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee8d30
00:35:28.687 [2024-12-14 22:44:49.419047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.687 [2024-12-14 22:44:49.419066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.428735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee95a0
00:35:28.687 [2024-12-14 22:44:49.429987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.687 [2024-12-14 22:44:49.430006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.437002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef57b0
00:35:28.687 [2024-12-14 22:44:49.437836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.687 [2024-12-14 22:44:49.437854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.446844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee38d0
00:35:28.687 [2024-12-14 22:44:49.448232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.687 [2024-12-14 22:44:49.448249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:35:28.687 [2024-12-14 22:44:49.456133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eff3c8
00:35:28.688 [2024-12-14 22:44:49.457664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.688 [2024-12-14 22:44:49.457681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:28.688 [2024-12-14 22:44:49.462490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee0ea0
00:35:28.688 [2024-12-14 22:44:49.463161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.688 [2024-12-14 22:44:49.463179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:28.688 [2024-12-14 22:44:49.471792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee5220
00:35:28.688 [2024-12-14 22:44:49.472573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.688 [2024-12-14 22:44:49.472591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:35:28.688 [2024-12-14 22:44:49.481135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeaef0
00:35:28.688 [2024-12-14 22:44:49.482085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.688 [2024-12-14 22:44:49.482104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:28.688 [2024-12-14 22:44:49.490522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee49b0
00:35:28.688 [2024-12-14 22:44:49.491605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.688 [2024-12-14 22:44:49.491624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:35:28.688 [2024-12-14 22:44:49.499140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeff18
00:35:28.688 [2024-12-14 22:44:49.500186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.688 [2024-12-14 22:44:49.500204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:28.688 [2024-12-14 22:44:49.507479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef3e60
00:35:28.688 [2024-12-14 22:44:49.508182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.688 [2024-12-14 22:44:49.508200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:35:28.688 [2024-12-14 22:44:49.516332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef9b30
00:35:28.688 [2024-12-14 22:44:49.517033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.688 [2024-12-14 22:44:49.517052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:35:28.688 [2024-12-14 22:44:49.525260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eedd58
00:35:28.688 [2024-12-14 22:44:49.525943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.688 [2024-12-14 22:44:49.525960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:35:28.688 [2024-12-14 22:44:49.534114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef20d8
00:35:28.688 [2024-12-14 22:44:49.534791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.688 [2024-12-14 22:44:49.534809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:35:28.688 [2024-12-14 22:44:49.543004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeee38
00:35:28.688 [2024-12-14 22:44:49.543676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.688 [2024-12-14 22:44:49.543697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:35:28.688 [2024-12-14 22:44:49.551884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee99d8
00:35:28.688 [2024-12-14 22:44:49.552557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.688 [2024-12-14 22:44:49.552575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:35:28.688 [2024-12-14 22:44:49.560751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016edf118
00:35:28.688 [2024-12-14 22:44:49.561491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.688 [2024-12-14 22:44:49.561510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:35:28.965 [2024-12-14 22:44:49.569894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee27f0
00:35:28.965 [2024-12-14 22:44:49.570611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.965 [2024-12-14 22:44:49.570629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:35:28.965 [2024-12-14 22:44:49.579030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eefae0
00:35:28.965 [2024-12-14 22:44:49.579742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.965 [2024-12-14 22:44:49.579761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:35:28.965 [2024-12-14 22:44:49.587960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeb760
00:35:28.965 [2024-12-14 22:44:49.588629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.965 [2024-12-14 22:44:49.588647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:35:28.965 [2024-12-14 22:44:49.596836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee49b0
00:35:28.965 [2024-12-14 22:44:49.597569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.965 [2024-12-14 22:44:49.597586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:35:28.965 [2024-12-14 22:44:49.605938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef1430
00:35:28.965 [2024-12-14 22:44:49.606422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.965 [2024-12-14 22:44:49.606440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:28.965 [2024-12-14 22:44:49.616116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee9168
00:35:28.965 [2024-12-14 22:44:49.617419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.617437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.625412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef5378
00:35:28.966 [2024-12-14 22:44:49.626797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.626815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.633698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efc128
00:35:28.966 [2024-12-14 22:44:49.634718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.634736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.641804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee95a0
00:35:28.966 [2024-12-14 22:44:49.642815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.642834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.650062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef0350
00:35:28.966 [2024-12-14 22:44:49.650732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.650750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.658811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efdeb0
00:35:28.966 [2024-12-14 22:44:49.659481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.659500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.667678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef1ca0
00:35:28.966 [2024-12-14 22:44:49.668345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.668363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.676511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eebfd0
00:35:28.966 [2024-12-14 22:44:49.677198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.677216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.685394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eef6a8
00:35:28.966 [2024-12-14 22:44:49.686061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.686079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.694291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeb328
00:35:28.966 [2024-12-14 22:44:49.694972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.694991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.703151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef6458
00:35:28.966 [2024-12-14 22:44:49.703814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.703832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.711972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efeb58
00:35:28.966 [2024-12-14 22:44:49.712637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.712655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.720819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee3060
00:35:28.966 [2024-12-14 22:44:49.721488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.721506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.729688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efb8b8
00:35:28.966 [2024-12-14 22:44:49.730355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.730372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.738577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eee5c8
00:35:28.966 [2024-12-14 22:44:49.739244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.739262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.748593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeff18
00:35:28.966 [2024-12-14 22:44:49.749785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.749803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.756651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef0ff8
00:35:28.966 [2024-12-14 22:44:49.757152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.757170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.765737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef4298
00:35:28.966 [2024-12-14 22:44:49.766536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.766554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.774215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016eeee38
00:35:28.966 [2024-12-14 22:44:49.775016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.775040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.784389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee88f8
00:35:28.966 [2024-12-14 22:44:49.785332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.966 [2024-12-14 22:44:49.785351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:28.966 [2024-12-14 22:44:49.793310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efda78
00:35:28.966 [2024-12-14 22:44:49.794237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.966 [2024-12-14 22:44:49.794255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:28.966 [2024-12-14 22:44:49.803279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee38d0 00:35:28.966 [2024-12-14 22:44:49.804626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.966 [2024-12-14 22:44:49.804644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:28.966 [2024-12-14 22:44:49.811542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016edf988 00:35:28.966 [2024-12-14 22:44:49.812545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.966 [2024-12-14 22:44:49.812563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:28.966 [2024-12-14 22:44:49.820285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efc560 00:35:28.966 [2024-12-14 22:44:49.821297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.966 [2024-12-14 22:44:49.821315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:28.966 [2024-12-14 22:44:49.829175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef7da8 00:35:28.966 
[2024-12-14 22:44:49.830197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.966 [2024-12-14 22:44:49.830215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:28.966 [2024-12-14 22:44:49.838195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee6300 00:35:28.966 [2024-12-14 22:44:49.839260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.966 [2024-12-14 22:44:49.839278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:29.251 [2024-12-14 22:44:49.847299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee7818 00:35:29.251 [2024-12-14 22:44:49.848341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.251 [2024-12-14 22:44:49.848359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:29.251 [2024-12-14 22:44:49.855766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ede470 00:35:29.251 [2024-12-14 22:44:49.857091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.251 [2024-12-14 22:44:49.857109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:29.251 [2024-12-14 22:44:49.864229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x194e0e0) with pdu=0x200016eedd58 00:35:29.251 [2024-12-14 22:44:49.864914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.251 [2024-12-14 22:44:49.864933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:29.251 [2024-12-14 22:44:49.872651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee01f8 00:35:29.251 [2024-12-14 22:44:49.873291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.251 [2024-12-14 22:44:49.873308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:29.251 [2024-12-14 22:44:49.881979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee3d08 00:35:29.251 [2024-12-14 22:44:49.882728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.251 [2024-12-14 22:44:49.882746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:29.251 [2024-12-14 22:44:49.891886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016edf550 00:35:29.251 [2024-12-14 22:44:49.892780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.251 [2024-12-14 22:44:49.892799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:29.251 [2024-12-14 22:44:49.901044] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee6fa8 00:35:29.251 [2024-12-14 22:44:49.902048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.251 [2024-12-14 22:44:49.902066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:29.251 [2024-12-14 22:44:49.910364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efe720 00:35:29.251 [2024-12-14 22:44:49.911482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.251 [2024-12-14 22:44:49.911500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:29.251 [2024-12-14 22:44:49.917877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee5658 00:35:29.251 [2024-12-14 22:44:49.918451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.251 [2024-12-14 22:44:49.918469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:29.251 [2024-12-14 22:44:49.927901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee6300 00:35:29.251 [2024-12-14 22:44:49.929014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.251 [2024-12-14 22:44:49.929033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 
dnr:0 00:35:29.251 [2024-12-14 22:44:49.936428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ef57b0 00:35:29.251 [2024-12-14 22:44:49.937531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.251 [2024-12-14 22:44:49.937549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:29.251 [2024-12-14 22:44:49.944695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee6fa8 00:35:29.251 [2024-12-14 22:44:49.945460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.252 [2024-12-14 22:44:49.945477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:29.252 [2024-12-14 22:44:49.953438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efdeb0 00:35:29.252 [2024-12-14 22:44:49.954224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.252 [2024-12-14 22:44:49.954242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:29.252 [2024-12-14 22:44:49.962311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016efd208 00:35:29.252 [2024-12-14 22:44:49.963077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.252 [2024-12-14 22:44:49.963095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:29.252 [2024-12-14 22:44:49.971176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e0e0) with pdu=0x200016ee23b8 00:35:29.252 [2024-12-14 22:44:49.971934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.252 [2024-12-14 22:44:49.971952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:29.252 28430.00 IOPS, 111.05 MiB/s 00:35:29.252 Latency(us) 00:35:29.252 [2024-12-14T21:44:50.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:29.252 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:29.252 nvme0n1 : 2.00 28443.43 111.11 0.00 0.00 4494.50 1966.08 13232.03 00:35:29.252 [2024-12-14T21:44:50.136Z] =================================================================================================================== 00:35:29.252 [2024-12-14T21:44:50.136Z] Total : 28443.43 111.11 0.00 0.00 4494.50 1966.08 13232.03 00:35:29.252 { 00:35:29.252 "results": [ 00:35:29.252 { 00:35:29.252 "job": "nvme0n1", 00:35:29.252 "core_mask": "0x2", 00:35:29.252 "workload": "randwrite", 00:35:29.252 "status": "finished", 00:35:29.252 "queue_depth": 128, 00:35:29.252 "io_size": 4096, 00:35:29.252 "runtime": 2.003556, 00:35:29.252 "iops": 28443.42758575253, 00:35:29.252 "mibps": 111.10713900684583, 00:35:29.252 "io_failed": 0, 00:35:29.252 "io_timeout": 0, 00:35:29.252 "avg_latency_us": 4494.499658875552, 00:35:29.252 "min_latency_us": 1966.08, 00:35:29.252 "max_latency_us": 13232.030476190475 00:35:29.252 } 00:35:29.252 ], 00:35:29.252 "core_count": 1 00:35:29.252 } 00:35:29.252 22:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:29.252 22:44:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:29.252 22:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:29.252 | .driver_specific 00:35:29.252 | .nvme_error 00:35:29.252 | .status_code 00:35:29.252 | .command_transient_transport_error' 00:35:29.252 22:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 223 > 0 )) 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 530949 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 530949 ']' 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 530949 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 530949 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 530949' 00:35:29.534 killing process with pid 530949 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@973 -- # kill 530949 00:35:29.534 Received shutdown signal, test time was about 2.000000 seconds 00:35:29.534 00:35:29.534 Latency(us) 00:35:29.534 [2024-12-14T21:44:50.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:29.534 [2024-12-14T21:44:50.418Z] =================================================================================================================== 00:35:29.534 [2024-12-14T21:44:50.418Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 530949 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=531530 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 531530 /var/tmp/bperf.sock 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 531530 ']' 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:29.534 22:44:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:29.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:29.534 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:29.815 [2024-12-14 22:44:50.453641] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:29.815 [2024-12-14 22:44:50.453686] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531530 ] 00:35:29.815 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:29.815 Zero copy mechanism will not be used. 
00:35:29.815 [2024-12-14 22:44:50.527869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.815 [2024-12-14 22:44:50.550144] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:29.815 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:29.815 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:29.815 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:29.815 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:30.090 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:30.090 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.090 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:30.090 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.090 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:30.090 22:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:30.349 nvme0n1 00:35:30.349 22:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:30.349 22:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.349 22:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:30.349 22:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.349 22:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:30.349 22:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:30.610 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:30.610 Zero copy mechanism will not be used. 00:35:30.610 Running I/O for 2 seconds... 00:35:30.610 [2024-12-14 22:44:51.255621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.610 [2024-12-14 22:44:51.255727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.610 [2024-12-14 22:44:51.255755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.610 [2024-12-14 22:44:51.260782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.610 [2024-12-14 22:44:51.260845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.610 [2024-12-14 22:44:51.260867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.610 
[2024-12-14 22:44:51.265364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.610 [2024-12-14 22:44:51.265419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.610 [2024-12-14 22:44:51.265443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.610 [2024-12-14 22:44:51.269964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.610 [2024-12-14 22:44:51.270032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.610 [2024-12-14 22:44:51.270051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.610 [2024-12-14 22:44:51.274491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.610 [2024-12-14 22:44:51.274559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.610 [2024-12-14 22:44:51.274578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.610 [2024-12-14 22:44:51.279009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.610 [2024-12-14 22:44:51.279066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.610 [2024-12-14 22:44:51.279085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.610 [2024-12-14 22:44:51.283550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.610 [2024-12-14 22:44:51.283619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.610 [2024-12-14 22:44:51.283638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.610 [2024-12-14 22:44:51.288084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.610 [2024-12-14 22:44:51.288152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.610 [2024-12-14 22:44:51.288170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.610 [2024-12-14 22:44:51.292560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.610 [2024-12-14 22:44:51.292622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.610 [2024-12-14 22:44:51.292640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.610 [2024-12-14 22:44:51.296953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.610 [2024-12-14 22:44:51.297031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.610 [2024-12-14 22:44:51.297050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:30.610 [2024-12-14 22:44:51.301363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8
00:35:30.610 [2024-12-14 22:44:51.301422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.610 [2024-12-14 22:44:51.301440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.874 [2024-12-14 22:44:51.684601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.874 [2024-12-14 22:44:51.689126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.874 [2024-12-14 22:44:51.689186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.874 [2024-12-14 22:44:51.689203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.874 [2024-12-14 22:44:51.694215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.874 [2024-12-14 22:44:51.694268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.874 [2024-12-14 22:44:51.694285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.874 [2024-12-14 22:44:51.700111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.874 [2024-12-14 22:44:51.700213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.874 [2024-12-14 22:44:51.700232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.874 [2024-12-14 22:44:51.705211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.874 [2024-12-14 22:44:51.705267] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.874 [2024-12-14 22:44:51.705285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.874 [2024-12-14 22:44:51.710123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.874 [2024-12-14 22:44:51.710177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.874 [2024-12-14 22:44:51.710196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.874 [2024-12-14 22:44:51.714830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.874 [2024-12-14 22:44:51.714923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.874 [2024-12-14 22:44:51.714941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.874 [2024-12-14 22:44:51.719543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.874 [2024-12-14 22:44:51.719672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.874 [2024-12-14 22:44:51.719691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.874 [2024-12-14 22:44:51.724282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.874 [2024-12-14 22:44:51.724370] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.874 [2024-12-14 22:44:51.724388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.874 [2024-12-14 22:44:51.729392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.874 [2024-12-14 22:44:51.729565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.874 [2024-12-14 22:44:51.729583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.874 [2024-12-14 22:44:51.735985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.874 [2024-12-14 22:44:51.736110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.874 [2024-12-14 22:44:51.736128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.874 [2024-12-14 22:44:51.742348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.874 [2024-12-14 22:44:51.742430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.874 [2024-12-14 22:44:51.742449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.874 [2024-12-14 22:44:51.748471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with 
pdu=0x200016eff3c8 00:35:30.874 [2024-12-14 22:44:51.748569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.874 [2024-12-14 22:44:51.748587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.874 [2024-12-14 22:44:51.753520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:30.874 [2024-12-14 22:44:51.753664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.874 [2024-12-14 22:44:51.753683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.758808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.758878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.758897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.765305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.765435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.765458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.772814] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.772973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.772993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.779762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.779951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.779971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.787126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.787285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.787304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.794048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.794206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.794225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 
22:44:51.801656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.801783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.801802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.808880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.809034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.809053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.816015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.816276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.816295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.822814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.823055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.823076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.828579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.828802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.828821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.834794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.835069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.835089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.840896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.841168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.841188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.846132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.846370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.846388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.851293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.851537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.851556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.856064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.856308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.856327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.860750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.860994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.861013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.866310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.866547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.866567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.871066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.871301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.871320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.875611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.875859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.875879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.880028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.880281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.880301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.884471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.884719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:31.136 [2024-12-14 22:44:51.884738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.136 [2024-12-14 22:44:51.889251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.136 [2024-12-14 22:44:51.889504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.136 [2024-12-14 22:44:51.889524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.894059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.894286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.894305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.898739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.899002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.899021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.903393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.903635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.903654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.907772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.908029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.908048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.912736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.912999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.913022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.917613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.917859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.917878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.922127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.922384] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.922403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.926971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.927212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.927231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.931980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.932225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.932245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.936631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.936871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.936890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.941333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.941584] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.941603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.945639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.945885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.945910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.949729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.949986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.950005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.953779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.954029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.954048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.957853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with 
pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.958115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.958134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.961962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.962208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.962229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.966028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.966278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.966297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.970095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.970347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.970367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.974151] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.974394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.974414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.978173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.978437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.978456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.982249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.982515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.982535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.986296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.986546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.986565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 
22:44:51.990327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.990581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.990600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.994383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.994630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.994649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:51.998430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:51.998681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:51.998700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:52.002460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:52.002709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:52.002729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:52.006483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:52.006731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:52.006750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.137 [2024-12-14 22:44:52.010515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.137 [2024-12-14 22:44:52.010773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.137 [2024-12-14 22:44:52.010792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.138 [2024-12-14 22:44:52.014888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.138 [2024-12-14 22:44:52.015156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.138 [2024-12-14 22:44:52.015176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.398 [2024-12-14 22:44:52.019531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.398 [2024-12-14 22:44:52.019785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.398 [2024-12-14 22:44:52.019805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.398 [2024-12-14 22:44:52.023748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.398 [2024-12-14 22:44:52.023985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.398 [2024-12-14 22:44:52.024007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.398 [2024-12-14 22:44:52.028206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.398 [2024-12-14 22:44:52.028422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.398 [2024-12-14 22:44:52.028441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.398 [2024-12-14 22:44:52.032741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.398 [2024-12-14 22:44:52.032940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.398 [2024-12-14 22:44:52.032958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.398 [2024-12-14 22:44:52.037407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.398 [2024-12-14 22:44:52.037587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.398 [2024-12-14 22:44:52.037604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.398 [2024-12-14 22:44:52.042165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.398 [2024-12-14 22:44:52.042386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.398 [2024-12-14 22:44:52.042404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.398 [2024-12-14 22:44:52.046852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.398 [2024-12-14 22:44:52.047038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.398 [2024-12-14 22:44:52.047055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.398 [2024-12-14 22:44:52.051748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.051941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.051959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.056303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.056500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:31.399 [2024-12-14 22:44:52.056524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.060327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.060520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.060537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.064083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.064284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.064302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.067934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.068110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.068127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.071937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.072121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.072138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.076015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.076211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.076230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.079949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.080155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.080172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.083822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.084027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.084052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.087537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.087734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.087753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.091280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.091489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.091512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.095051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.095260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.095279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.098775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.098974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.098991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.102514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 
00:35:31.399 [2024-12-14 22:44:52.102710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.102733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.106226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.106428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.106445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.109901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.110102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.110127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.113612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.113815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.113833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.117289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.117503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.117521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.120995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.121201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.121219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.124691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.124900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.124925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.128395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.128584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.128604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.132082] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.132290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.132309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.135721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.135926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.135943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.139596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.139819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.139838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.143310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.143516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.143543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:35:31.399 [2024-12-14 22:44:52.146975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.147160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.147178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.150677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.150877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.150896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.399 [2024-12-14 22:44:52.154346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.399 [2024-12-14 22:44:52.154541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.399 [2024-12-14 22:44:52.154558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.158004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.158194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.158211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.161956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.162172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.162191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.166639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.166847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.166865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.172113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.172381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.172400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.177138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.177387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.177405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.181623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.181854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.181873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.186250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.186464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.186483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.190462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.190667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.190686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.194761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.195015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:31.400 [2024-12-14 22:44:52.195034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.198802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.199028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.199046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.203212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.203428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.203447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.207204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.207405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.207424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.211281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.211479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.211498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.215154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.215360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.215379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.219354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.219698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.219717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.224783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.225111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.225130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.229346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.229549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.229567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.233454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.233654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.233673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.237583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.237787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.237809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.241637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.241868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.241887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.246019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 
00:35:31.400 [2024-12-14 22:44:52.246249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.246267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.250080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.250271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.250288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.254836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.256229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.256247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.400 6634.00 IOPS, 829.25 MiB/s [2024-12-14T21:44:52.284Z] [2024-12-14 22:44:52.260877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.261140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.261158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.265900] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.266084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.266103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.271573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.271805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.271824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.400 [2024-12-14 22:44:52.277093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.400 [2024-12-14 22:44:52.277286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.400 [2024-12-14 22:44:52.277310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.661 [2024-12-14 22:44:52.282810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.661 [2024-12-14 22:44:52.283075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.661 [2024-12-14 22:44:52.283094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:35:31.661 [2024-12-14 22:44:52.288369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.661 [2024-12-14 22:44:52.288530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.661 [2024-12-14 22:44:52.288548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.661 [2024-12-14 22:44:52.294386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.661 [2024-12-14 22:44:52.294573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.661 [2024-12-14 22:44:52.294591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.661 [2024-12-14 22:44:52.300936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.661 [2024-12-14 22:44:52.301200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.661 [2024-12-14 22:44:52.301220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.661 [2024-12-14 22:44:52.307222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.661 [2024-12-14 22:44:52.307407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.661 [2024-12-14 22:44:52.307425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.312180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.312366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.312383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.316831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.316972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.316990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.321464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.321611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.321629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.326402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.326556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.326573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.332434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.332690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.332709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.338740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.338918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.338937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.344921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.345013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.345030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.351852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.351981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:31.662 [2024-12-14 22:44:52.351999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.357937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.358185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.358205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.364564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.364704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.364722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.371317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.371565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.371583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.377821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.378033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.378056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.384220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.384408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.384439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.390596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.390882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.390908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.396984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.397287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.397306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.403565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.403834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.403855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.409890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.410188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.410208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.416327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.416601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.416621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.422611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.422841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.422861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.427156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 
00:35:31.662 [2024-12-14 22:44:52.427340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.427358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.430955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.431142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.431160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.434781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.434975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.434994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.438633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.438819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.438838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.442471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.442657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.442676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.446282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.446468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.446487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.450043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.450225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.450243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.453806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.453997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.454016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.662 [2024-12-14 22:44:52.457554] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.662 [2024-12-14 22:44:52.457739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.662 [2024-12-14 22:44:52.457760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.461282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.461463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.461483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.465025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.465207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.465227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.468761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.468950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.468969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:31.663 [2024-12-14 22:44:52.472501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.472687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.472707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.476242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.476430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.476450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.479980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.480167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.480187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.483712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.483918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.483937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.487721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.487909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.487928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.492796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.492993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.493012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.497529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.497726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.497754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.501476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.501662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.501683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.505447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.505630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.505649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.509365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.509545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.509564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.513425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.513624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.513643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.517405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.517594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:31.663 [2024-12-14 22:44:52.517615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.521398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.521581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.521601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.525693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.525925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.525946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.531058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.531293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.531312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.536672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.536984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.537004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.663 [2024-12-14 22:44:52.542839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.663 [2024-12-14 22:44:52.543143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.663 [2024-12-14 22:44:52.543164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.924 [2024-12-14 22:44:52.548831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.924 [2024-12-14 22:44:52.549006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.924 [2024-12-14 22:44:52.549025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.924 [2024-12-14 22:44:52.554246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.924 [2024-12-14 22:44:52.554473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.924 [2024-12-14 22:44:52.554493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.924 [2024-12-14 22:44:52.559991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.924 [2024-12-14 22:44:52.560203] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.924 [2024-12-14 22:44:52.560223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.924 [2024-12-14 22:44:52.565421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.924 [2024-12-14 22:44:52.565584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.924 [2024-12-14 22:44:52.565603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.924 [2024-12-14 22:44:52.571488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.924 [2024-12-14 22:44:52.571652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.924 [2024-12-14 22:44:52.571670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.924 [2024-12-14 22:44:52.577622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:31.924 [2024-12-14 22:44:52.577837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.924 [2024-12-14 22:44:52.577856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.924 [2024-12-14 22:44:52.583244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 
00:35:32.187 [2024-12-14 22:44:52.893684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.187 [2024-12-14 22:44:52.893703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.187 [2024-12-14 22:44:52.897409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.187 [2024-12-14 22:44:52.897575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.187 [2024-12-14 22:44:52.897594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.187 [2024-12-14 22:44:52.901426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.187 [2024-12-14 22:44:52.901591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.187 [2024-12-14 22:44:52.901609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.187 [2024-12-14 22:44:52.905327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.187 [2024-12-14 22:44:52.905511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.187 [2024-12-14 22:44:52.905529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.187 [2024-12-14 22:44:52.909898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.187 [2024-12-14 22:44:52.910077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.187 [2024-12-14 22:44:52.910096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.187 [2024-12-14 22:44:52.913875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.187 [2024-12-14 22:44:52.914045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.187 [2024-12-14 22:44:52.914063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.187 [2024-12-14 22:44:52.917753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.187 [2024-12-14 22:44:52.917924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.187 [2024-12-14 22:44:52.917942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.187 [2024-12-14 22:44:52.921566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.187 [2024-12-14 22:44:52.921733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.187 [2024-12-14 22:44:52.921751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.187 [2024-12-14 22:44:52.925459] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.187 [2024-12-14 22:44:52.925628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.187 [2024-12-14 22:44:52.925646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.187 [2024-12-14 22:44:52.929292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.187 [2024-12-14 22:44:52.929467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.187 [2024-12-14 22:44:52.929485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.187 [2024-12-14 22:44:52.933175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.187 [2024-12-14 22:44:52.933338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.187 [2024-12-14 22:44:52.933356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.187 [2024-12-14 22:44:52.937464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:52.937627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:52.937646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:35:32.188 [2024-12-14 22:44:52.941310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:52.941492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:52.941510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:52.945212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:52.945380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:52.945398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:52.949039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:52.949206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:52.949224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:52.953148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:52.953322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:52.953341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:52.957000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:52.957170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:52.957188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:52.960866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:52.961037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:52.961055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:52.964647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:52.964818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:52.964840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:52.968777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:52.968953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:52.968971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:52.972545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:52.972713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:52.972730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:52.976278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:52.976449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:52.976467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:52.980240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:52.980407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:52.980426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:52.984709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:52.984918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:32.188 [2024-12-14 22:44:52.984937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:52.989495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:52.989663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:52.989682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:52.993247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:52.993406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:52.993425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:52.997170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:52.997332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:52.997351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:53.001384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:53.001573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:53.001591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:53.005265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:53.005428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:53.005446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:53.009158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:53.009320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:53.009338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:53.012935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:53.013099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:53.013117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:53.016503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:53.016674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:53.016693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:53.020250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:53.020432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:53.020452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:53.024881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:53.025066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:53.025085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:53.029361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:53.029538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:53.029556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:53.033296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 
00:35:32.188 [2024-12-14 22:44:53.033476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:53.033495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:53.037210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:53.037399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:53.037418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:53.041129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:53.041309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:53.041330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:53.045039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.188 [2024-12-14 22:44:53.045228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.188 [2024-12-14 22:44:53.045247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.188 [2024-12-14 22:44:53.048993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.189 [2024-12-14 22:44:53.049177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.189 [2024-12-14 22:44:53.049196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.189 [2024-12-14 22:44:53.052837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.189 [2024-12-14 22:44:53.053029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.189 [2024-12-14 22:44:53.053048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.189 [2024-12-14 22:44:53.056831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.189 [2024-12-14 22:44:53.057007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.189 [2024-12-14 22:44:53.057025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.189 [2024-12-14 22:44:53.060695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.189 [2024-12-14 22:44:53.060858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.189 [2024-12-14 22:44:53.060876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.189 [2024-12-14 22:44:53.064697] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.189 [2024-12-14 22:44:53.064870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.189 [2024-12-14 22:44:53.064889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.189 [2024-12-14 22:44:53.068708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.449 [2024-12-14 22:44:53.068880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.449 [2024-12-14 22:44:53.068914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.072581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.072746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.072764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.076706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.076908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.076927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:32.450 [2024-12-14 22:44:53.080559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.080733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.080752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.084321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.084489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.084508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.088274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.088444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.088462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.092933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.093122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.093140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.097295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.097464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.097483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.101814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.101986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.102004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.105868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.106051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.106071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.109741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.109912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.109930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.113556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.113725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.113744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.117394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.117563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.117581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.121244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.121426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.121445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.125146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.125316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:32.450 [2024-12-14 22:44:53.125334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.129152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.129319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.129337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.133175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.133339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.133357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.137005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.137175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.137194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.140828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.140997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.141015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.144657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.144830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.144848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.148598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.148763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.148781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.152528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.152692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.152710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.156353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.156522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.156540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.160244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.160417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.160436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.164198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.164365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.164384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.168066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.168232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.168250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.171892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 
00:35:32.450 [2024-12-14 22:44:53.172073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.172098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.175755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.450 [2024-12-14 22:44:53.175935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.450 [2024-12-14 22:44:53.175954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.450 [2024-12-14 22:44:53.179527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.179693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.179711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.451 [2024-12-14 22:44:53.183491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.183658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.183676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.451 [2024-12-14 22:44:53.187469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.187643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.187661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.451 [2024-12-14 22:44:53.192588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.192779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.192797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.451 [2024-12-14 22:44:53.197058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.197232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.197250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.451 [2024-12-14 22:44:53.200995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.201163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.201181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.451 [2024-12-14 22:44:53.204885] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.205057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.205075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.451 [2024-12-14 22:44:53.208635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.208808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.208825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.451 [2024-12-14 22:44:53.212358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.212523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.212540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.451 [2024-12-14 22:44:53.216064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.216249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.216267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:35:32.451 [2024-12-14 22:44:53.219769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.219946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.219965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.451 [2024-12-14 22:44:53.223541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.223710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.223729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.451 [2024-12-14 22:44:53.227253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.227422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.227441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.451 [2024-12-14 22:44:53.231055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.231231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.231249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.451 [2024-12-14 22:44:53.234970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.235140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.235158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.451 [2024-12-14 22:44:53.238886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.239066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.239085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.451 [2024-12-14 22:44:53.242773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.242961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.242980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.451 [2024-12-14 22:44:53.246871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.247050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.247068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.451 [2024-12-14 22:44:53.250853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.251039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.251058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.451 [2024-12-14 22:44:53.254816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.254988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.255007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.451 6814.00 IOPS, 851.75 MiB/s [2024-12-14T21:44:53.335Z] [2024-12-14 22:44:53.259597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x194e5c0) with pdu=0x200016eff3c8 00:35:32.451 [2024-12-14 22:44:53.259719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.451 [2024-12-14 22:44:53.259737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.451 00:35:32.451 Latency(us) 00:35:32.451 [2024-12-14T21:44:53.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:32.451 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:32.451 nvme0n1 : 2.00 6811.84 851.48 
0.00 0.00 2344.77 1747.63 7583.45 00:35:32.451 [2024-12-14T21:44:53.335Z] =================================================================================================================== 00:35:32.451 [2024-12-14T21:44:53.335Z] Total : 6811.84 851.48 0.00 0.00 2344.77 1747.63 7583.45 00:35:32.451 { 00:35:32.451 "results": [ 00:35:32.451 { 00:35:32.451 "job": "nvme0n1", 00:35:32.451 "core_mask": "0x2", 00:35:32.451 "workload": "randwrite", 00:35:32.451 "status": "finished", 00:35:32.451 "queue_depth": 16, 00:35:32.451 "io_size": 131072, 00:35:32.451 "runtime": 2.002983, 00:35:32.451 "iops": 6811.840140430548, 00:35:32.451 "mibps": 851.4800175538185, 00:35:32.451 "io_failed": 0, 00:35:32.451 "io_timeout": 0, 00:35:32.451 "avg_latency_us": 2344.772586170792, 00:35:32.451 "min_latency_us": 1747.6266666666668, 00:35:32.451 "max_latency_us": 7583.451428571429 00:35:32.451 } 00:35:32.451 ], 00:35:32.451 "core_count": 1 00:35:32.451 } 00:35:32.451 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:32.451 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:32.451 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:32.451 | .driver_specific 00:35:32.451 | .nvme_error 00:35:32.451 | .status_code 00:35:32.451 | .command_transient_transport_error' 00:35:32.451 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:32.711 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 441 > 0 )) 00:35:32.711 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 531530 00:35:32.711 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 
-- # '[' -z 531530 ']' 00:35:32.711 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 531530 00:35:32.711 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:32.711 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:32.711 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 531530 00:35:32.711 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:32.711 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:32.711 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 531530' 00:35:32.711 killing process with pid 531530 00:35:32.711 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 531530 00:35:32.711 Received shutdown signal, test time was about 2.000000 seconds 00:35:32.711 00:35:32.711 Latency(us) 00:35:32.711 [2024-12-14T21:44:53.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:32.711 [2024-12-14T21:44:53.595Z] =================================================================================================================== 00:35:32.711 [2024-12-14T21:44:53.595Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:32.711 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 531530 00:35:32.970 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 529793 00:35:32.970 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 529793 ']' 00:35:32.970 22:44:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 529793 00:35:32.970 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:32.970 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:32.970 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 529793 00:35:32.970 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:32.970 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:32.970 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 529793' 00:35:32.970 killing process with pid 529793 00:35:32.970 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 529793 00:35:32.970 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 529793 00:35:33.230 00:35:33.230 real 0m13.846s 00:35:33.230 user 0m26.429s 00:35:33.230 sys 0m4.642s 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:33.230 ************************************ 00:35:33.230 END TEST nvmf_digest_error 00:35:33.230 ************************************ 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:33.230 22:44:53 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:33.230 rmmod nvme_tcp 00:35:33.230 rmmod nvme_fabrics 00:35:33.230 rmmod nvme_keyring 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 529793 ']' 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 529793 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 529793 ']' 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 529793 00:35:33.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (529793) - No such process 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 529793 is not found' 00:35:33.230 Process with pid 529793 is not found 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@791 -- # iptables-save 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:33.230 22:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:35.769 00:35:35.769 real 0m36.120s 00:35:35.769 user 0m54.642s 00:35:35.769 sys 0m13.802s 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:35.769 ************************************ 00:35:35.769 END TEST nvmf_digest 00:35:35.769 ************************************ 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:35:35.769 22:44:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.769 ************************************ 00:35:35.769 START TEST nvmf_bdevperf 00:35:35.769 ************************************ 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:35.769 * Looking for test storage... 00:35:35.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:35.769 22:44:56 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:35.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.769 --rc genhtml_branch_coverage=1 00:35:35.769 --rc genhtml_function_coverage=1 00:35:35.769 --rc genhtml_legend=1 00:35:35.769 --rc geninfo_all_blocks=1 00:35:35.769 --rc geninfo_unexecuted_blocks=1 00:35:35.769 00:35:35.769 ' 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:35.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.769 --rc genhtml_branch_coverage=1 00:35:35.769 --rc genhtml_function_coverage=1 00:35:35.769 --rc genhtml_legend=1 00:35:35.769 --rc geninfo_all_blocks=1 00:35:35.769 --rc geninfo_unexecuted_blocks=1 00:35:35.769 00:35:35.769 ' 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:35.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.769 --rc genhtml_branch_coverage=1 00:35:35.769 --rc genhtml_function_coverage=1 00:35:35.769 --rc genhtml_legend=1 00:35:35.769 --rc geninfo_all_blocks=1 00:35:35.769 --rc geninfo_unexecuted_blocks=1 00:35:35.769 00:35:35.769 ' 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:35.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.769 --rc genhtml_branch_coverage=1 00:35:35.769 --rc genhtml_function_coverage=1 00:35:35.769 --rc genhtml_legend=1 00:35:35.769 --rc geninfo_all_blocks=1 00:35:35.769 --rc geninfo_unexecuted_blocks=1 00:35:35.769 00:35:35.769 ' 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:35.769 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:35.769 22:44:56 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:35.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:35:35.770 22:44:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:41.051 Found 
0000:af:00.0 (0x8086 - 0x159b) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:41.051 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:41.051 Found net devices under 0000:af:00.0: cvl_0_0 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:41.051 Found net devices under 0000:af:00.1: cvl_0_1 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:41.051 22:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:35:41.311 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:41.311 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:41.311 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:41.311 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:41.311 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:41.311 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:41.311 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:41.311 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:41.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:41.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:35:41.311 00:35:41.311 --- 10.0.0.2 ping statistics --- 00:35:41.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.311 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:35:41.311 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:41.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:41.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:35:41.311 00:35:41.311 --- 10.0.0.1 ping statistics --- 00:35:41.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.311 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:35:41.311 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:41.311 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:41.311 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:41.311 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:41.311 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:41.311 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:41.311 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:41.311 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:41.311 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=535688 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 535688 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 535688 ']' 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:41.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.571 [2024-12-14 22:45:02.255482] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:41.571 [2024-12-14 22:45:02.255528] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:41.571 [2024-12-14 22:45:02.331877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:41.571 [2024-12-14 22:45:02.354349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:41.571 [2024-12-14 22:45:02.354385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:41.571 [2024-12-14 22:45:02.354392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:41.571 [2024-12-14 22:45:02.354398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:41.571 [2024-12-14 22:45:02.354404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:41.571 [2024-12-14 22:45:02.355765] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:35:41.571 [2024-12-14 22:45:02.355873] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:41.571 [2024-12-14 22:45:02.355875] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:41.571 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.830 [2024-12-14 22:45:02.486537] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.830 22:45:02 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.830 Malloc0 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.830 [2024-12-14 22:45:02.551233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:41.830 { 00:35:41.830 "params": { 00:35:41.830 "name": "Nvme$subsystem", 00:35:41.830 "trtype": "$TEST_TRANSPORT", 00:35:41.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:41.830 "adrfam": "ipv4", 00:35:41.830 "trsvcid": "$NVMF_PORT", 00:35:41.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:41.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:41.830 "hdgst": ${hdgst:-false}, 00:35:41.830 "ddgst": ${ddgst:-false} 00:35:41.830 }, 00:35:41.830 "method": "bdev_nvme_attach_controller" 00:35:41.830 } 00:35:41.830 EOF 00:35:41.830 )") 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:41.830 22:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:41.830 "params": { 00:35:41.830 "name": "Nvme1", 00:35:41.830 "trtype": "tcp", 00:35:41.830 "traddr": "10.0.0.2", 00:35:41.830 "adrfam": "ipv4", 00:35:41.830 "trsvcid": "4420", 00:35:41.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:41.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:41.830 "hdgst": false, 00:35:41.830 "ddgst": false 00:35:41.830 }, 00:35:41.830 "method": "bdev_nvme_attach_controller" 00:35:41.830 }' 00:35:41.830 [2024-12-14 22:45:02.599553] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:41.830 [2024-12-14 22:45:02.599598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid535719 ] 00:35:41.830 [2024-12-14 22:45:02.675150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.830 [2024-12-14 22:45:02.697534] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:42.397 Running I/O for 1 seconds... 
00:35:43.334 11249.00 IOPS, 43.94 MiB/s 00:35:43.334 Latency(us) 00:35:43.335 [2024-12-14T21:45:04.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:43.335 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:43.335 Verification LBA range: start 0x0 length 0x4000 00:35:43.335 Nvme1n1 : 1.01 11311.92 44.19 0.00 0.00 11261.40 1162.48 12857.54 00:35:43.335 [2024-12-14T21:45:04.219Z] =================================================================================================================== 00:35:43.335 [2024-12-14T21:45:04.219Z] Total : 11311.92 44.19 0.00 0.00 11261.40 1162.48 12857.54 00:35:43.335 22:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=535942 00:35:43.335 22:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:43.335 22:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:43.335 22:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:43.335 22:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:43.335 22:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:43.335 22:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:43.335 22:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:43.335 { 00:35:43.335 "params": { 00:35:43.335 "name": "Nvme$subsystem", 00:35:43.335 "trtype": "$TEST_TRANSPORT", 00:35:43.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:43.335 "adrfam": "ipv4", 00:35:43.335 "trsvcid": "$NVMF_PORT", 00:35:43.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:43.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:43.335 "hdgst": ${hdgst:-false}, 00:35:43.335 "ddgst": 
${ddgst:-false} 00:35:43.335 }, 00:35:43.335 "method": "bdev_nvme_attach_controller" 00:35:43.335 } 00:35:43.335 EOF 00:35:43.335 )") 00:35:43.335 22:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:43.335 22:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:43.335 22:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:43.335 22:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:43.335 "params": { 00:35:43.335 "name": "Nvme1", 00:35:43.335 "trtype": "tcp", 00:35:43.335 "traddr": "10.0.0.2", 00:35:43.335 "adrfam": "ipv4", 00:35:43.335 "trsvcid": "4420", 00:35:43.335 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:43.335 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:43.335 "hdgst": false, 00:35:43.335 "ddgst": false 00:35:43.335 }, 00:35:43.335 "method": "bdev_nvme_attach_controller" 00:35:43.335 }' 00:35:43.335 [2024-12-14 22:45:04.218376] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:43.335 [2024-12-14 22:45:04.218438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid535942 ] 00:35:43.594 [2024-12-14 22:45:04.293219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.594 [2024-12-14 22:45:04.313353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:43.854 Running I/O for 15 seconds... 
00:35:46.167 11456.00 IOPS, 44.75 MiB/s [2024-12-14T21:45:07.314Z] 11465.00 IOPS, 44.79 MiB/s [2024-12-14T21:45:07.314Z]
22:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 535688
00:35:46.430 22:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:35:46.430 [2024-12-14 22:45:07.189391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:46.430 [2024-12-14 22:45:07.189427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical nvme_io_qpair_print_command / spdk_nvme_print_completion pairs repeated for READ lba:101992 through lba:102912 (len:8 each, varying cid) and one WRITE lba:103000, every completion reporting ABORTED - SQ DELETION (00/08) after the target was killed ...]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.433 [2024-12-14 22:45:07.191374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.433 [2024-12-14 22:45:07.191382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:102928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.433 [2024-12-14 22:45:07.191389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.433 [2024-12-14 22:45:07.191397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.433 [2024-12-14 22:45:07.191403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.433 [2024-12-14 22:45:07.191412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.433 [2024-12-14 22:45:07.191419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.433 [2024-12-14 22:45:07.191427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.433 [2024-12-14 22:45:07.191433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.433 [2024-12-14 22:45:07.191441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.433 [2024-12-14 22:45:07.191448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.433 [2024-12-14 22:45:07.191456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.433 [2024-12-14 22:45:07.191462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.433 [2024-12-14 22:45:07.191470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.433 [2024-12-14 22:45:07.191476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.433 [2024-12-14 22:45:07.191484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.433 [2024-12-14 22:45:07.191491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.433 [2024-12-14 22:45:07.191499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edccb0 is same with the state(6) to be set 00:35:46.433 [2024-12-14 22:45:07.191507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:46.433 [2024-12-14 22:45:07.191512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:46.433 [2024-12-14 22:45:07.191518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102992 len:8 PRP1 0x0 PRP2 0x0 00:35:46.433 [2024-12-14 22:45:07.191531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.433 [2024-12-14 22:45:07.194378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.433 [2024-12-14 22:45:07.194433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.433 [2024-12-14 22:45:07.194914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.433 [2024-12-14 22:45:07.194930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.433 [2024-12-14 22:45:07.194938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.433 [2024-12-14 22:45:07.195113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.433 [2024-12-14 22:45:07.195286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.433 [2024-12-14 22:45:07.195295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.433 [2024-12-14 22:45:07.195302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.433 [2024-12-14 22:45:07.195309] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.433 [2024-12-14 22:45:07.207666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.433 [2024-12-14 22:45:07.208027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.433 [2024-12-14 22:45:07.208046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.433 [2024-12-14 22:45:07.208053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.433 [2024-12-14 22:45:07.208228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.433 [2024-12-14 22:45:07.208401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.433 [2024-12-14 22:45:07.208410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.433 [2024-12-14 22:45:07.208417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.433 [2024-12-14 22:45:07.208426] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.433 [2024-12-14 22:45:07.220564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.433 [2024-12-14 22:45:07.220865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.433 [2024-12-14 22:45:07.220883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.433 [2024-12-14 22:45:07.220891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.433 [2024-12-14 22:45:07.221067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.433 [2024-12-14 22:45:07.221236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.433 [2024-12-14 22:45:07.221245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.433 [2024-12-14 22:45:07.221251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.433 [2024-12-14 22:45:07.221258] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.433 [2024-12-14 22:45:07.233517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.433 [2024-12-14 22:45:07.233865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.433 [2024-12-14 22:45:07.233881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.433 [2024-12-14 22:45:07.233889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.434 [2024-12-14 22:45:07.234085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.434 [2024-12-14 22:45:07.234259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.434 [2024-12-14 22:45:07.234268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.434 [2024-12-14 22:45:07.234274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.434 [2024-12-14 22:45:07.234280] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.434 [2024-12-14 22:45:07.246443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.434 [2024-12-14 22:45:07.246808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.434 [2024-12-14 22:45:07.246828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.434 [2024-12-14 22:45:07.246835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.434 [2024-12-14 22:45:07.247009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.434 [2024-12-14 22:45:07.247177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.434 [2024-12-14 22:45:07.247185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.434 [2024-12-14 22:45:07.247191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.434 [2024-12-14 22:45:07.247198] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.434 [2024-12-14 22:45:07.259291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.434 [2024-12-14 22:45:07.259618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.434 [2024-12-14 22:45:07.259635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.434 [2024-12-14 22:45:07.259643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.434 [2024-12-14 22:45:07.259811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.434 [2024-12-14 22:45:07.259984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.434 [2024-12-14 22:45:07.259993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.434 [2024-12-14 22:45:07.259999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.434 [2024-12-14 22:45:07.260005] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.434 [2024-12-14 22:45:07.272193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.434 [2024-12-14 22:45:07.272489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.434 [2024-12-14 22:45:07.272505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.434 [2024-12-14 22:45:07.272512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.434 [2024-12-14 22:45:07.272680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.434 [2024-12-14 22:45:07.272848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.434 [2024-12-14 22:45:07.272856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.434 [2024-12-14 22:45:07.272862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.434 [2024-12-14 22:45:07.272868] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.434 [2024-12-14 22:45:07.285082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.434 [2024-12-14 22:45:07.285379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.434 [2024-12-14 22:45:07.285395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.434 [2024-12-14 22:45:07.285402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.434 [2024-12-14 22:45:07.285573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.434 [2024-12-14 22:45:07.285742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.434 [2024-12-14 22:45:07.285751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.434 [2024-12-14 22:45:07.285759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.434 [2024-12-14 22:45:07.285765] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.434 [2024-12-14 22:45:07.297906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.434 [2024-12-14 22:45:07.298249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.434 [2024-12-14 22:45:07.298264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.434 [2024-12-14 22:45:07.298271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.434 [2024-12-14 22:45:07.298439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.434 [2024-12-14 22:45:07.298606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.434 [2024-12-14 22:45:07.298614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.434 [2024-12-14 22:45:07.298620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.434 [2024-12-14 22:45:07.298627] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.434 [2024-12-14 22:45:07.310956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.695 [2024-12-14 22:45:07.311349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-12-14 22:45:07.311393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.695 [2024-12-14 22:45:07.311416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.695 [2024-12-14 22:45:07.312034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.695 [2024-12-14 22:45:07.312209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.695 [2024-12-14 22:45:07.312219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.695 [2024-12-14 22:45:07.312225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.695 [2024-12-14 22:45:07.312231] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.695 [2024-12-14 22:45:07.323964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.695 [2024-12-14 22:45:07.324378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-12-14 22:45:07.324395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.695 [2024-12-14 22:45:07.324402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.695 [2024-12-14 22:45:07.324571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.695 [2024-12-14 22:45:07.324739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.695 [2024-12-14 22:45:07.324748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.695 [2024-12-14 22:45:07.324758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.695 [2024-12-14 22:45:07.324764] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.695 [2024-12-14 22:45:07.336802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.695 [2024-12-14 22:45:07.337106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-12-14 22:45:07.337124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.695 [2024-12-14 22:45:07.337132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.695 [2024-12-14 22:45:07.337300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.695 [2024-12-14 22:45:07.337468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.695 [2024-12-14 22:45:07.337477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.695 [2024-12-14 22:45:07.337483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.695 [2024-12-14 22:45:07.337489] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.695 [2024-12-14 22:45:07.349796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.695 [2024-12-14 22:45:07.350266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-12-14 22:45:07.350283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.695 [2024-12-14 22:45:07.350291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.695 [2024-12-14 22:45:07.350464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.695 [2024-12-14 22:45:07.350638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.695 [2024-12-14 22:45:07.350646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.695 [2024-12-14 22:45:07.350652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.695 [2024-12-14 22:45:07.350659] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.695 [2024-12-14 22:45:07.362806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.695 [2024-12-14 22:45:07.363151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-12-14 22:45:07.363167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.695 [2024-12-14 22:45:07.363174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.695 [2024-12-14 22:45:07.363343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.695 [2024-12-14 22:45:07.363510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.695 [2024-12-14 22:45:07.363518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.695 [2024-12-14 22:45:07.363524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.695 [2024-12-14 22:45:07.363531] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.695 [2024-12-14 22:45:07.375681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.695 [2024-12-14 22:45:07.376140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.695 [2024-12-14 22:45:07.376185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.695 [2024-12-14 22:45:07.376208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.695 [2024-12-14 22:45:07.376793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.696 [2024-12-14 22:45:07.377375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.696 [2024-12-14 22:45:07.377385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.696 [2024-12-14 22:45:07.377391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.696 [2024-12-14 22:45:07.377398] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.696 [2024-12-14 22:45:07.388513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.696 [2024-12-14 22:45:07.388878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-12-14 22:45:07.388896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.696 [2024-12-14 22:45:07.388912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.696 [2024-12-14 22:45:07.389081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.696 [2024-12-14 22:45:07.389248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.696 [2024-12-14 22:45:07.389257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.696 [2024-12-14 22:45:07.389263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.696 [2024-12-14 22:45:07.389269] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.696 [2024-12-14 22:45:07.401408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.696 [2024-12-14 22:45:07.401771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-12-14 22:45:07.401816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.696 [2024-12-14 22:45:07.401839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.696 [2024-12-14 22:45:07.402368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.696 [2024-12-14 22:45:07.402537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.696 [2024-12-14 22:45:07.402545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.696 [2024-12-14 22:45:07.402551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.696 [2024-12-14 22:45:07.402557] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.696 [2024-12-14 22:45:07.414346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.696 [2024-12-14 22:45:07.414677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-12-14 22:45:07.414696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.696 [2024-12-14 22:45:07.414703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.696 [2024-12-14 22:45:07.414871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.696 [2024-12-14 22:45:07.415064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.696 [2024-12-14 22:45:07.415073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.696 [2024-12-14 22:45:07.415079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.696 [2024-12-14 22:45:07.415086] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.696 [2024-12-14 22:45:07.427300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.696 [2024-12-14 22:45:07.427650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-12-14 22:45:07.427666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.696 [2024-12-14 22:45:07.427674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.696 [2024-12-14 22:45:07.427842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.696 [2024-12-14 22:45:07.428017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.696 [2024-12-14 22:45:07.428026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.696 [2024-12-14 22:45:07.428032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.696 [2024-12-14 22:45:07.428039] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.696 [2024-12-14 22:45:07.440187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.696 [2024-12-14 22:45:07.440539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-12-14 22:45:07.440555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.696 [2024-12-14 22:45:07.440563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.696 [2024-12-14 22:45:07.440737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.696 [2024-12-14 22:45:07.440946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.696 [2024-12-14 22:45:07.440956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.696 [2024-12-14 22:45:07.440962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.696 [2024-12-14 22:45:07.440969] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.696 [2024-12-14 22:45:07.453237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.696 [2024-12-14 22:45:07.453540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-12-14 22:45:07.453556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.696 [2024-12-14 22:45:07.453564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.696 [2024-12-14 22:45:07.453741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.696 [2024-12-14 22:45:07.453925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.696 [2024-12-14 22:45:07.453935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.696 [2024-12-14 22:45:07.453942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.696 [2024-12-14 22:45:07.453949] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.696 [2024-12-14 22:45:07.466157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.696 [2024-12-14 22:45:07.466439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-12-14 22:45:07.466458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.696 [2024-12-14 22:45:07.466466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.696 [2024-12-14 22:45:07.466636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.696 [2024-12-14 22:45:07.466804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.696 [2024-12-14 22:45:07.466813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.696 [2024-12-14 22:45:07.466819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.696 [2024-12-14 22:45:07.466825] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.696 [2024-12-14 22:45:07.479161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.696 [2024-12-14 22:45:07.479580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-12-14 22:45:07.479646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.696 [2024-12-14 22:45:07.479671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.696 [2024-12-14 22:45:07.480193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.696 [2024-12-14 22:45:07.480376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.696 [2024-12-14 22:45:07.480385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.696 [2024-12-14 22:45:07.480391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.696 [2024-12-14 22:45:07.480397] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.696 [2024-12-14 22:45:07.492155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.696 [2024-12-14 22:45:07.492535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-12-14 22:45:07.492551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.696 [2024-12-14 22:45:07.492559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.696 [2024-12-14 22:45:07.492732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.696 [2024-12-14 22:45:07.492913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.696 [2024-12-14 22:45:07.492923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.696 [2024-12-14 22:45:07.492932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.696 [2024-12-14 22:45:07.492939] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.696 [2024-12-14 22:45:07.505166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.696 [2024-12-14 22:45:07.505444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.696 [2024-12-14 22:45:07.505460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.696 [2024-12-14 22:45:07.505468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.696 [2024-12-14 22:45:07.505640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.696 [2024-12-14 22:45:07.505813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.696 [2024-12-14 22:45:07.505821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.697 [2024-12-14 22:45:07.505827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.697 [2024-12-14 22:45:07.505833] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.697 [2024-12-14 22:45:07.518199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.697 [2024-12-14 22:45:07.518556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-12-14 22:45:07.518573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.697 [2024-12-14 22:45:07.518581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.697 [2024-12-14 22:45:07.518775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.697 [2024-12-14 22:45:07.518973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.697 [2024-12-14 22:45:07.518983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.697 [2024-12-14 22:45:07.518990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.697 [2024-12-14 22:45:07.518997] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.697 [2024-12-14 22:45:07.531422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.697 [2024-12-14 22:45:07.531819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-12-14 22:45:07.531836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.697 [2024-12-14 22:45:07.531844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.697 [2024-12-14 22:45:07.532036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.697 [2024-12-14 22:45:07.532220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.697 [2024-12-14 22:45:07.532229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.697 [2024-12-14 22:45:07.532236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.697 [2024-12-14 22:45:07.532243] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.697 [2024-12-14 22:45:07.544630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.697 [2024-12-14 22:45:07.545083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-12-14 22:45:07.545115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.697 [2024-12-14 22:45:07.545124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.697 [2024-12-14 22:45:07.545308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.697 [2024-12-14 22:45:07.545492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.697 [2024-12-14 22:45:07.545500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.697 [2024-12-14 22:45:07.545507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.697 [2024-12-14 22:45:07.545514] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.697 [2024-12-14 22:45:07.557811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.697 [2024-12-14 22:45:07.558195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-12-14 22:45:07.558212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.697 [2024-12-14 22:45:07.558221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.697 [2024-12-14 22:45:07.558405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.697 [2024-12-14 22:45:07.558589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.697 [2024-12-14 22:45:07.558598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.697 [2024-12-14 22:45:07.558605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.697 [2024-12-14 22:45:07.558612] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.697 [2024-12-14 22:45:07.570981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.697 [2024-12-14 22:45:07.571335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.697 [2024-12-14 22:45:07.571353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.697 [2024-12-14 22:45:07.571360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.697 [2024-12-14 22:45:07.571544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.697 [2024-12-14 22:45:07.571728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.697 [2024-12-14 22:45:07.571737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.697 [2024-12-14 22:45:07.571744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.697 [2024-12-14 22:45:07.571751] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.958 [2024-12-14 22:45:07.584148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.958 [2024-12-14 22:45:07.584515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.958 [2024-12-14 22:45:07.584536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.958 [2024-12-14 22:45:07.584544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.958 [2024-12-14 22:45:07.584728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.958 [2024-12-14 22:45:07.584921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.958 [2024-12-14 22:45:07.584930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.958 [2024-12-14 22:45:07.584937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.958 [2024-12-14 22:45:07.584944] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.958 [2024-12-14 22:45:07.597402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.958 [2024-12-14 22:45:07.597753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.958 [2024-12-14 22:45:07.597771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.958 [2024-12-14 22:45:07.597779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.958 [2024-12-14 22:45:07.597971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.958 [2024-12-14 22:45:07.598156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.958 [2024-12-14 22:45:07.598165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.958 [2024-12-14 22:45:07.598174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.958 [2024-12-14 22:45:07.598181] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.958 [2024-12-14 22:45:07.610588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.958 [2024-12-14 22:45:07.610958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.958 [2024-12-14 22:45:07.610976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.958 [2024-12-14 22:45:07.610984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.958 [2024-12-14 22:45:07.611179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.958 [2024-12-14 22:45:07.611352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.958 [2024-12-14 22:45:07.611361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.958 [2024-12-14 22:45:07.611368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.958 [2024-12-14 22:45:07.611375] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.958 [2024-12-14 22:45:07.623857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.958 [2024-12-14 22:45:07.624218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.958 [2024-12-14 22:45:07.624235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.958 [2024-12-14 22:45:07.624243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.958 [2024-12-14 22:45:07.624431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.958 [2024-12-14 22:45:07.624618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.958 [2024-12-14 22:45:07.624628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.958 [2024-12-14 22:45:07.624634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.958 [2024-12-14 22:45:07.624641] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.958 [2024-12-14 22:45:07.637328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.958 [2024-12-14 22:45:07.637746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.958 [2024-12-14 22:45:07.637765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.958 [2024-12-14 22:45:07.637774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.958 [2024-12-14 22:45:07.637977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.958 [2024-12-14 22:45:07.638175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.958 [2024-12-14 22:45:07.638186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.958 [2024-12-14 22:45:07.638193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.958 [2024-12-14 22:45:07.638200] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.958 9710.67 IOPS, 37.93 MiB/s [2024-12-14T21:45:07.842Z] [2024-12-14 22:45:07.650721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.958 [2024-12-14 22:45:07.651048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.958 [2024-12-14 22:45:07.651067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.958 [2024-12-14 22:45:07.651075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.958 [2024-12-14 22:45:07.651271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.958 [2024-12-14 22:45:07.651469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.959 [2024-12-14 22:45:07.651478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.959 [2024-12-14 22:45:07.651486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.959 [2024-12-14 22:45:07.651493] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.959 [2024-12-14 22:45:07.664225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.959 [2024-12-14 22:45:07.664651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.959 [2024-12-14 22:45:07.664669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.959 [2024-12-14 22:45:07.664678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.959 [2024-12-14 22:45:07.664874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.959 [2024-12-14 22:45:07.665081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.959 [2024-12-14 22:45:07.665094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.959 [2024-12-14 22:45:07.665102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.959 [2024-12-14 22:45:07.665110] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.959 [2024-12-14 22:45:07.677250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.959 [2024-12-14 22:45:07.677654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.959 [2024-12-14 22:45:07.677672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.959 [2024-12-14 22:45:07.677679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.959 [2024-12-14 22:45:07.677853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.959 [2024-12-14 22:45:07.678037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.959 [2024-12-14 22:45:07.678046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.959 [2024-12-14 22:45:07.678052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.959 [2024-12-14 22:45:07.678058] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.959 [2024-12-14 22:45:07.690292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.959 [2024-12-14 22:45:07.690674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.959 [2024-12-14 22:45:07.690692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.959 [2024-12-14 22:45:07.690700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.959 [2024-12-14 22:45:07.690886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.959 [2024-12-14 22:45:07.691080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.959 [2024-12-14 22:45:07.691089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.959 [2024-12-14 22:45:07.691097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.959 [2024-12-14 22:45:07.691104] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.959 [2024-12-14 22:45:07.703450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.959 [2024-12-14 22:45:07.703887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.959 [2024-12-14 22:45:07.703913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.959 [2024-12-14 22:45:07.703921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.959 [2024-12-14 22:45:07.704106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.959 [2024-12-14 22:45:07.704293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.959 [2024-12-14 22:45:07.704302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.959 [2024-12-14 22:45:07.704310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.959 [2024-12-14 22:45:07.704317] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.959 [2024-12-14 22:45:07.716727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.959 [2024-12-14 22:45:07.717127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.959 [2024-12-14 22:45:07.717145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.959 [2024-12-14 22:45:07.717153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.959 [2024-12-14 22:45:07.717337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.959 [2024-12-14 22:45:07.717520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.959 [2024-12-14 22:45:07.717529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.959 [2024-12-14 22:45:07.717535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.959 [2024-12-14 22:45:07.717542] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.959 [2024-12-14 22:45:07.729738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.959 [2024-12-14 22:45:07.730168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.959 [2024-12-14 22:45:07.730185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.959 [2024-12-14 22:45:07.730193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.959 [2024-12-14 22:45:07.730366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.959 [2024-12-14 22:45:07.730540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.959 [2024-12-14 22:45:07.730550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.959 [2024-12-14 22:45:07.730557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.959 [2024-12-14 22:45:07.730564] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.959 [2024-12-14 22:45:07.742963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.959 [2024-12-14 22:45:07.743375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.959 [2024-12-14 22:45:07.743393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.959 [2024-12-14 22:45:07.743401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.959 [2024-12-14 22:45:07.743585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.959 [2024-12-14 22:45:07.743769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.959 [2024-12-14 22:45:07.743778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.959 [2024-12-14 22:45:07.743785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.959 [2024-12-14 22:45:07.743791] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.959 [2024-12-14 22:45:07.756124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.959 [2024-12-14 22:45:07.756410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.959 [2024-12-14 22:45:07.756431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.959 [2024-12-14 22:45:07.756439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.959 [2024-12-14 22:45:07.756624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.959 [2024-12-14 22:45:07.756809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.959 [2024-12-14 22:45:07.756818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.959 [2024-12-14 22:45:07.756824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.959 [2024-12-14 22:45:07.756831] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.959 [2024-12-14 22:45:07.769305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.959 [2024-12-14 22:45:07.769745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.959 [2024-12-14 22:45:07.769762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.959 [2024-12-14 22:45:07.769769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.959 [2024-12-14 22:45:07.769959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.959 [2024-12-14 22:45:07.770144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.959 [2024-12-14 22:45:07.770153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.959 [2024-12-14 22:45:07.770160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.959 [2024-12-14 22:45:07.770167] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.959 [2024-12-14 22:45:07.782447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.959 [2024-12-14 22:45:07.782825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.959 [2024-12-14 22:45:07.782844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.960 [2024-12-14 22:45:07.782852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.960 [2024-12-14 22:45:07.783045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.960 [2024-12-14 22:45:07.783230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.960 [2024-12-14 22:45:07.783240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.960 [2024-12-14 22:45:07.783247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.960 [2024-12-14 22:45:07.783254] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.960 [2024-12-14 22:45:07.795628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.960 [2024-12-14 22:45:07.795970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.960 [2024-12-14 22:45:07.795988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.960 [2024-12-14 22:45:07.795996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.960 [2024-12-14 22:45:07.796186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.960 [2024-12-14 22:45:07.796371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.960 [2024-12-14 22:45:07.796382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.960 [2024-12-14 22:45:07.796389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.960 [2024-12-14 22:45:07.796396] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.960 [2024-12-14 22:45:07.808701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.960 [2024-12-14 22:45:07.809017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.960 [2024-12-14 22:45:07.809034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.960 [2024-12-14 22:45:07.809042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.960 [2024-12-14 22:45:07.809227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.960 [2024-12-14 22:45:07.809413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.960 [2024-12-14 22:45:07.809422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.960 [2024-12-14 22:45:07.809429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.960 [2024-12-14 22:45:07.809436] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.960 [2024-12-14 22:45:07.822065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.960 [2024-12-14 22:45:07.822375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.960 [2024-12-14 22:45:07.822393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.960 [2024-12-14 22:45:07.822403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.960 [2024-12-14 22:45:07.822598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.960 [2024-12-14 22:45:07.822795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.960 [2024-12-14 22:45:07.822805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.960 [2024-12-14 22:45:07.822812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.960 [2024-12-14 22:45:07.822819] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.960 [2024-12-14 22:45:07.835237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.960 [2024-12-14 22:45:07.835699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.960 [2024-12-14 22:45:07.835716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:46.960 [2024-12-14 22:45:07.835724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:46.960 [2024-12-14 22:45:07.835929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:46.960 [2024-12-14 22:45:07.836125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.960 [2024-12-14 22:45:07.836139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.960 [2024-12-14 22:45:07.836146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.960 [2024-12-14 22:45:07.836154] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.221 [2024-12-14 22:45:07.848472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.221 [2024-12-14 22:45:07.848815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.221 [2024-12-14 22:45:07.848832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.221 [2024-12-14 22:45:07.848840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.221 [2024-12-14 22:45:07.849031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.221 [2024-12-14 22:45:07.849215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.221 [2024-12-14 22:45:07.849223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.221 [2024-12-14 22:45:07.849230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.221 [2024-12-14 22:45:07.849237] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.221 [2024-12-14 22:45:07.861470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.221 [2024-12-14 22:45:07.861853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.221 [2024-12-14 22:45:07.861869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.221 [2024-12-14 22:45:07.861876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.221 [2024-12-14 22:45:07.862057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.221 [2024-12-14 22:45:07.862230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.221 [2024-12-14 22:45:07.862238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.221 [2024-12-14 22:45:07.862244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.221 [2024-12-14 22:45:07.862251] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.221 [2024-12-14 22:45:07.874476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.221 [2024-12-14 22:45:07.874808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.221 [2024-12-14 22:45:07.874825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.221 [2024-12-14 22:45:07.874832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.221 [2024-12-14 22:45:07.875011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.221 [2024-12-14 22:45:07.875184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.221 [2024-12-14 22:45:07.875193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.221 [2024-12-14 22:45:07.875200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.221 [2024-12-14 22:45:07.875206] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.221 [2024-12-14 22:45:07.887433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.221 [2024-12-14 22:45:07.887874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.221 [2024-12-14 22:45:07.887930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.221 [2024-12-14 22:45:07.887956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.221 [2024-12-14 22:45:07.888425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.221 [2024-12-14 22:45:07.888593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.221 [2024-12-14 22:45:07.888601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.221 [2024-12-14 22:45:07.888607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.221 [2024-12-14 22:45:07.888614] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.221 [2024-12-14 22:45:07.900291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.221 [2024-12-14 22:45:07.900637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.221 [2024-12-14 22:45:07.900654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.221 [2024-12-14 22:45:07.900661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.221 [2024-12-14 22:45:07.900829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.221 [2024-12-14 22:45:07.901007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.221 [2024-12-14 22:45:07.901017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.221 [2024-12-14 22:45:07.901023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.221 [2024-12-14 22:45:07.901030] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.221 [2024-12-14 22:45:07.913057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.221 [2024-12-14 22:45:07.913468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.221 [2024-12-14 22:45:07.913484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.222 [2024-12-14 22:45:07.913491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.222 [2024-12-14 22:45:07.913660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.222 [2024-12-14 22:45:07.913828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.222 [2024-12-14 22:45:07.913836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.222 [2024-12-14 22:45:07.913842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.222 [2024-12-14 22:45:07.913848] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.222 [2024-12-14 22:45:07.925976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.222 [2024-12-14 22:45:07.926318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.222 [2024-12-14 22:45:07.926371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.222 [2024-12-14 22:45:07.926394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.222 [2024-12-14 22:45:07.926995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.222 [2024-12-14 22:45:07.927582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.222 [2024-12-14 22:45:07.927606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.222 [2024-12-14 22:45:07.927628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.222 [2024-12-14 22:45:07.927635] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.222 [2024-12-14 22:45:07.938979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.222 [2024-12-14 22:45:07.939419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.222 [2024-12-14 22:45:07.939463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.222 [2024-12-14 22:45:07.939486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.222 [2024-12-14 22:45:07.939888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.222 [2024-12-14 22:45:07.940061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.222 [2024-12-14 22:45:07.940070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.222 [2024-12-14 22:45:07.940076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.222 [2024-12-14 22:45:07.940082] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.222 [2024-12-14 22:45:07.951891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.222 [2024-12-14 22:45:07.952338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.222 [2024-12-14 22:45:07.952354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.222 [2024-12-14 22:45:07.952361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.222 [2024-12-14 22:45:07.952534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.222 [2024-12-14 22:45:07.952706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.222 [2024-12-14 22:45:07.952715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.222 [2024-12-14 22:45:07.952721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.222 [2024-12-14 22:45:07.952727] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.222 [2024-12-14 22:45:07.964909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.222 [2024-12-14 22:45:07.965281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.222 [2024-12-14 22:45:07.965297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.222 [2024-12-14 22:45:07.965304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.222 [2024-12-14 22:45:07.965481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.222 [2024-12-14 22:45:07.965653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.222 [2024-12-14 22:45:07.965662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.222 [2024-12-14 22:45:07.965668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.222 [2024-12-14 22:45:07.965675] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.222 [2024-12-14 22:45:07.977849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.222 [2024-12-14 22:45:07.978206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.222 [2024-12-14 22:45:07.978222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.222 [2024-12-14 22:45:07.978229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.222 [2024-12-14 22:45:07.978398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.222 [2024-12-14 22:45:07.978565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.222 [2024-12-14 22:45:07.978573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.222 [2024-12-14 22:45:07.978579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.222 [2024-12-14 22:45:07.978585] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.222 [2024-12-14 22:45:07.990699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.222 [2024-12-14 22:45:07.990982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.222 [2024-12-14 22:45:07.990998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.222 [2024-12-14 22:45:07.991005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.222 [2024-12-14 22:45:07.991178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.222 [2024-12-14 22:45:07.991351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.222 [2024-12-14 22:45:07.991359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.222 [2024-12-14 22:45:07.991365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.222 [2024-12-14 22:45:07.991371] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.222 [2024-12-14 22:45:08.003508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.222 [2024-12-14 22:45:08.003873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.222 [2024-12-14 22:45:08.003889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.222 [2024-12-14 22:45:08.003896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.222 [2024-12-14 22:45:08.004076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.222 [2024-12-14 22:45:08.004256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.222 [2024-12-14 22:45:08.004267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.222 [2024-12-14 22:45:08.004273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.222 [2024-12-14 22:45:08.004279] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.222 [2024-12-14 22:45:08.016458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.222 [2024-12-14 22:45:08.016898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.222 [2024-12-14 22:45:08.016958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.222 [2024-12-14 22:45:08.016981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.222 [2024-12-14 22:45:08.017564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.222 [2024-12-14 22:45:08.018026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.222 [2024-12-14 22:45:08.018035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.222 [2024-12-14 22:45:08.018041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.222 [2024-12-14 22:45:08.018047] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.222 [2024-12-14 22:45:08.029408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.222 [2024-12-14 22:45:08.029779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.222 [2024-12-14 22:45:08.029794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.222 [2024-12-14 22:45:08.029801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.222 [2024-12-14 22:45:08.029976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.222 [2024-12-14 22:45:08.030144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.222 [2024-12-14 22:45:08.030152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.222 [2024-12-14 22:45:08.030158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.222 [2024-12-14 22:45:08.030164] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.222 [2024-12-14 22:45:08.042332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.222 [2024-12-14 22:45:08.042772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.222 [2024-12-14 22:45:08.042816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.222 [2024-12-14 22:45:08.042840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.222 [2024-12-14 22:45:08.043437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.223 [2024-12-14 22:45:08.043916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.223 [2024-12-14 22:45:08.043924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.223 [2024-12-14 22:45:08.043930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.223 [2024-12-14 22:45:08.043936] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.223 [2024-12-14 22:45:08.055119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.223 [2024-12-14 22:45:08.055478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.223 [2024-12-14 22:45:08.055494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.223 [2024-12-14 22:45:08.055501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.223 [2024-12-14 22:45:08.055669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.223 [2024-12-14 22:45:08.055837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.223 [2024-12-14 22:45:08.055845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.223 [2024-12-14 22:45:08.055851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.223 [2024-12-14 22:45:08.055857] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.223 [2024-12-14 22:45:08.067968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.223 [2024-12-14 22:45:08.068310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.223 [2024-12-14 22:45:08.068326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.223 [2024-12-14 22:45:08.068333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.223 [2024-12-14 22:45:08.068501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.223 [2024-12-14 22:45:08.068668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.223 [2024-12-14 22:45:08.068676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.223 [2024-12-14 22:45:08.068682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.223 [2024-12-14 22:45:08.068688] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.223 [2024-12-14 22:45:08.080837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.223 [2024-12-14 22:45:08.081187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.223 [2024-12-14 22:45:08.081203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.223 [2024-12-14 22:45:08.081210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.223 [2024-12-14 22:45:08.081378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.223 [2024-12-14 22:45:08.081546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.223 [2024-12-14 22:45:08.081554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.223 [2024-12-14 22:45:08.081560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.223 [2024-12-14 22:45:08.081566] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.223 [2024-12-14 22:45:08.093734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.223 [2024-12-14 22:45:08.094088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.223 [2024-12-14 22:45:08.094108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.223 [2024-12-14 22:45:08.094115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.223 [2024-12-14 22:45:08.094283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.223 [2024-12-14 22:45:08.094450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.223 [2024-12-14 22:45:08.094459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.223 [2024-12-14 22:45:08.094465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.223 [2024-12-14 22:45:08.094471] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.484 [2024-12-14 22:45:08.106707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.484 [2024-12-14 22:45:08.106979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.484 [2024-12-14 22:45:08.106995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.484 [2024-12-14 22:45:08.107002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.484 [2024-12-14 22:45:08.107170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.484 [2024-12-14 22:45:08.107338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.484 [2024-12-14 22:45:08.107346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.484 [2024-12-14 22:45:08.107352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.484 [2024-12-14 22:45:08.107358] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.484 [2024-12-14 22:45:08.119640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.484 [2024-12-14 22:45:08.119989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.484 [2024-12-14 22:45:08.120005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.484 [2024-12-14 22:45:08.120012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.484 [2024-12-14 22:45:08.120180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.484 [2024-12-14 22:45:08.120348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.484 [2024-12-14 22:45:08.120356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.484 [2024-12-14 22:45:08.120362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.484 [2024-12-14 22:45:08.120368] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.485 [2024-12-14 22:45:08.132569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.485 [2024-12-14 22:45:08.132899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.485 [2024-12-14 22:45:08.132920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.485 [2024-12-14 22:45:08.132927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.485 [2024-12-14 22:45:08.133099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.485 [2024-12-14 22:45:08.133267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.485 [2024-12-14 22:45:08.133275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.485 [2024-12-14 22:45:08.133281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.485 [2024-12-14 22:45:08.133287] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.485 [2024-12-14 22:45:08.145424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.485 [2024-12-14 22:45:08.146164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.485 [2024-12-14 22:45:08.146188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.485 [2024-12-14 22:45:08.146197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.485 [2024-12-14 22:45:08.146373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.485 [2024-12-14 22:45:08.146543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.485 [2024-12-14 22:45:08.146551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.485 [2024-12-14 22:45:08.146557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.485 [2024-12-14 22:45:08.146563] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.485 [2024-12-14 22:45:08.158234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.485 [2024-12-14 22:45:08.158718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.485 [2024-12-14 22:45:08.158763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.485 [2024-12-14 22:45:08.158787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.485 [2024-12-14 22:45:08.159202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.485 [2024-12-14 22:45:08.159372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.485 [2024-12-14 22:45:08.159380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.485 [2024-12-14 22:45:08.159386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.485 [2024-12-14 22:45:08.159392] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.485 [2024-12-14 22:45:08.171035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.485 [2024-12-14 22:45:08.171467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.485 [2024-12-14 22:45:08.171483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.485 [2024-12-14 22:45:08.171490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.485 [2024-12-14 22:45:08.171659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.485 [2024-12-14 22:45:08.171826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.485 [2024-12-14 22:45:08.171838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.485 [2024-12-14 22:45:08.171844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.485 [2024-12-14 22:45:08.171850] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.485 [2024-12-14 22:45:08.183767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.485 [2024-12-14 22:45:08.184196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.485 [2024-12-14 22:45:08.184241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.485 [2024-12-14 22:45:08.184265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.485 [2024-12-14 22:45:08.184849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.485 [2024-12-14 22:45:08.185349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.485 [2024-12-14 22:45:08.185357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.485 [2024-12-14 22:45:08.185363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.485 [2024-12-14 22:45:08.185370] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.485 [2024-12-14 22:45:08.196646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.485 [2024-12-14 22:45:08.197063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.485 [2024-12-14 22:45:08.197080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.485 [2024-12-14 22:45:08.197087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.485 [2024-12-14 22:45:08.197256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.485 [2024-12-14 22:45:08.197423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.485 [2024-12-14 22:45:08.197432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.485 [2024-12-14 22:45:08.197438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.485 [2024-12-14 22:45:08.197444] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.485 [2024-12-14 22:45:08.209603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.485 [2024-12-14 22:45:08.209963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.485 [2024-12-14 22:45:08.209981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.485 [2024-12-14 22:45:08.209988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.485 [2024-12-14 22:45:08.210161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.485 [2024-12-14 22:45:08.210334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.485 [2024-12-14 22:45:08.210342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.485 [2024-12-14 22:45:08.210349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.485 [2024-12-14 22:45:08.210355] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.485 [2024-12-14 22:45:08.222570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.485 [2024-12-14 22:45:08.222972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.485 [2024-12-14 22:45:08.222989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.485 [2024-12-14 22:45:08.222996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.485 [2024-12-14 22:45:08.223180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.485 [2024-12-14 22:45:08.223348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.485 [2024-12-14 22:45:08.223356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.485 [2024-12-14 22:45:08.223362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.485 [2024-12-14 22:45:08.223368] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.485 [2024-12-14 22:45:08.235599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.485 [2024-12-14 22:45:08.235956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.485 [2024-12-14 22:45:08.235973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.485 [2024-12-14 22:45:08.235981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.485 [2024-12-14 22:45:08.236155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.485 [2024-12-14 22:45:08.236327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.485 [2024-12-14 22:45:08.236336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.485 [2024-12-14 22:45:08.236342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.485 [2024-12-14 22:45:08.236348] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.485 [2024-12-14 22:45:08.248515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.485 [2024-12-14 22:45:08.248931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.485 [2024-12-14 22:45:08.248977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.485 [2024-12-14 22:45:08.249000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.485 [2024-12-14 22:45:08.249414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.485 [2024-12-14 22:45:08.249582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.485 [2024-12-14 22:45:08.249590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.485 [2024-12-14 22:45:08.249596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.485 [2024-12-14 22:45:08.249602] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.485 [2024-12-14 22:45:08.261298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.485 [2024-12-14 22:45:08.261710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.485 [2024-12-14 22:45:08.261762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.486 [2024-12-14 22:45:08.261786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.486 [2024-12-14 22:45:08.262214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.486 [2024-12-14 22:45:08.262382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.486 [2024-12-14 22:45:08.262390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.486 [2024-12-14 22:45:08.262396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.486 [2024-12-14 22:45:08.262402] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.486 [2024-12-14 22:45:08.274192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.486 [2024-12-14 22:45:08.274608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.486 [2024-12-14 22:45:08.274651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.486 [2024-12-14 22:45:08.274674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.486 [2024-12-14 22:45:08.275168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.486 [2024-12-14 22:45:08.275338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.486 [2024-12-14 22:45:08.275346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.486 [2024-12-14 22:45:08.275353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.486 [2024-12-14 22:45:08.275359] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.486 [2024-12-14 22:45:08.287121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.486 [2024-12-14 22:45:08.287523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.486 [2024-12-14 22:45:08.287540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.486 [2024-12-14 22:45:08.287548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.486 [2024-12-14 22:45:08.287716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.486 [2024-12-14 22:45:08.287883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.486 [2024-12-14 22:45:08.287891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.486 [2024-12-14 22:45:08.287897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.486 [2024-12-14 22:45:08.287912] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.486 [2024-12-14 22:45:08.299895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.486 [2024-12-14 22:45:08.300311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.486 [2024-12-14 22:45:08.300327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.486 [2024-12-14 22:45:08.300335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.486 [2024-12-14 22:45:08.300506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.486 [2024-12-14 22:45:08.300674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.486 [2024-12-14 22:45:08.300682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.486 [2024-12-14 22:45:08.300688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.486 [2024-12-14 22:45:08.300694] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.486 [2024-12-14 22:45:08.312785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.486 [2024-12-14 22:45:08.313206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.486 [2024-12-14 22:45:08.313223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.486 [2024-12-14 22:45:08.313230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.486 [2024-12-14 22:45:08.313398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.486 [2024-12-14 22:45:08.313566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.486 [2024-12-14 22:45:08.313574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.486 [2024-12-14 22:45:08.313580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.486 [2024-12-14 22:45:08.313586] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.486 [2024-12-14 22:45:08.325669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.486 [2024-12-14 22:45:08.326039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.486 [2024-12-14 22:45:08.326056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.486 [2024-12-14 22:45:08.326063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.486 [2024-12-14 22:45:08.326238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.486 [2024-12-14 22:45:08.326407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.486 [2024-12-14 22:45:08.326415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.486 [2024-12-14 22:45:08.326421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.486 [2024-12-14 22:45:08.326427] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.486 [2024-12-14 22:45:08.338505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.486 [2024-12-14 22:45:08.338868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.486 [2024-12-14 22:45:08.338883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.486 [2024-12-14 22:45:08.338890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.486 [2024-12-14 22:45:08.339080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.486 [2024-12-14 22:45:08.339248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.486 [2024-12-14 22:45:08.339259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.486 [2024-12-14 22:45:08.339265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.486 [2024-12-14 22:45:08.339271] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.486 [2024-12-14 22:45:08.351321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.486 [2024-12-14 22:45:08.351679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.486 [2024-12-14 22:45:08.351695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:47.486 [2024-12-14 22:45:08.351703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:47.486 [2024-12-14 22:45:08.351871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:47.486 [2024-12-14 22:45:08.352046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.486 [2024-12-14 22:45:08.352055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.486 [2024-12-14 22:45:08.352061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.486 [2024-12-14 22:45:08.352067] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.486 [2024-12-14 22:45:08.364236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.486 [2024-12-14 22:45:08.364595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.486 [2024-12-14 22:45:08.364611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.486 [2024-12-14 22:45:08.364619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.486 [2024-12-14 22:45:08.364787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.486 [2024-12-14 22:45:08.364963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.486 [2024-12-14 22:45:08.364971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.486 [2024-12-14 22:45:08.364977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.486 [2024-12-14 22:45:08.364983] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.747 [2024-12-14 22:45:08.377105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.747 [2024-12-14 22:45:08.377538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.747 [2024-12-14 22:45:08.377556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.747 [2024-12-14 22:45:08.377563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.747 [2024-12-14 22:45:08.377731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.747 [2024-12-14 22:45:08.377899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.747 [2024-12-14 22:45:08.377913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.747 [2024-12-14 22:45:08.377919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.747 [2024-12-14 22:45:08.377926] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.747 [2024-12-14 22:45:08.389870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.747 [2024-12-14 22:45:08.390264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.747 [2024-12-14 22:45:08.390281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.747 [2024-12-14 22:45:08.390288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.747 [2024-12-14 22:45:08.390456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.747 [2024-12-14 22:45:08.390624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.747 [2024-12-14 22:45:08.390632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.747 [2024-12-14 22:45:08.390638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.747 [2024-12-14 22:45:08.390644] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.747 [2024-12-14 22:45:08.402779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.747 [2024-12-14 22:45:08.403126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.747 [2024-12-14 22:45:08.403171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.747 [2024-12-14 22:45:08.403194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.748 [2024-12-14 22:45:08.403680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.748 [2024-12-14 22:45:08.403848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.748 [2024-12-14 22:45:08.403856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.748 [2024-12-14 22:45:08.403863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.748 [2024-12-14 22:45:08.403869] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.748 [2024-12-14 22:45:08.415605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.748 [2024-12-14 22:45:08.415954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.748 [2024-12-14 22:45:08.415971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.748 [2024-12-14 22:45:08.415979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.748 [2024-12-14 22:45:08.416147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.748 [2024-12-14 22:45:08.416314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.748 [2024-12-14 22:45:08.416322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.748 [2024-12-14 22:45:08.416328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.748 [2024-12-14 22:45:08.416334] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.748 [2024-12-14 22:45:08.428455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.748 [2024-12-14 22:45:08.428914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.748 [2024-12-14 22:45:08.428967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.748 [2024-12-14 22:45:08.428991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.748 [2024-12-14 22:45:08.429576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.748 [2024-12-14 22:45:08.429971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.748 [2024-12-14 22:45:08.429980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.748 [2024-12-14 22:45:08.429986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.748 [2024-12-14 22:45:08.429992] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.748 [2024-12-14 22:45:08.441429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.748 [2024-12-14 22:45:08.441852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.748 [2024-12-14 22:45:08.441896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.748 [2024-12-14 22:45:08.441933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.748 [2024-12-14 22:45:08.442463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.748 [2024-12-14 22:45:08.442854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.748 [2024-12-14 22:45:08.442871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.748 [2024-12-14 22:45:08.442884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.748 [2024-12-14 22:45:08.442898] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.748 [2024-12-14 22:45:08.456309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.748 [2024-12-14 22:45:08.456822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.748 [2024-12-14 22:45:08.456843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.748 [2024-12-14 22:45:08.456853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.748 [2024-12-14 22:45:08.457115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.748 [2024-12-14 22:45:08.457371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.748 [2024-12-14 22:45:08.457383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.748 [2024-12-14 22:45:08.457392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.748 [2024-12-14 22:45:08.457402] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.748 [2024-12-14 22:45:08.469339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.748 [2024-12-14 22:45:08.469679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.748 [2024-12-14 22:45:08.469696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.748 [2024-12-14 22:45:08.469703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.748 [2024-12-14 22:45:08.469879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.748 [2024-12-14 22:45:08.470063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.748 [2024-12-14 22:45:08.470072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.748 [2024-12-14 22:45:08.470078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.748 [2024-12-14 22:45:08.470084] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.748 [2024-12-14 22:45:08.482265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.748 [2024-12-14 22:45:08.482715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.748 [2024-12-14 22:45:08.482759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.748 [2024-12-14 22:45:08.482783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.748 [2024-12-14 22:45:08.483380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.748 [2024-12-14 22:45:08.483925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.748 [2024-12-14 22:45:08.483934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.748 [2024-12-14 22:45:08.483940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.748 [2024-12-14 22:45:08.483947] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.748 [2024-12-14 22:45:08.495221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.748 [2024-12-14 22:45:08.495579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.748 [2024-12-14 22:45:08.495595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.748 [2024-12-14 22:45:08.495603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.748 [2024-12-14 22:45:08.495776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.748 [2024-12-14 22:45:08.495956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.748 [2024-12-14 22:45:08.495965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.748 [2024-12-14 22:45:08.495971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.748 [2024-12-14 22:45:08.495977] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.748 [2024-12-14 22:45:08.508150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.748 [2024-12-14 22:45:08.508507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.748 [2024-12-14 22:45:08.508523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.748 [2024-12-14 22:45:08.508530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.748 [2024-12-14 22:45:08.508698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.748 [2024-12-14 22:45:08.508864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.748 [2024-12-14 22:45:08.508873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.748 [2024-12-14 22:45:08.508882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.748 [2024-12-14 22:45:08.508889] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.748 [2024-12-14 22:45:08.521047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.748 [2024-12-14 22:45:08.521402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.748 [2024-12-14 22:45:08.521418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.748 [2024-12-14 22:45:08.521425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.748 [2024-12-14 22:45:08.521593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.748 [2024-12-14 22:45:08.521760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.748 [2024-12-14 22:45:08.521768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.748 [2024-12-14 22:45:08.521775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.748 [2024-12-14 22:45:08.521781] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.748 [2024-12-14 22:45:08.533944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.748 [2024-12-14 22:45:08.534356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.748 [2024-12-14 22:45:08.534372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.748 [2024-12-14 22:45:08.534379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.748 [2024-12-14 22:45:08.534548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.748 [2024-12-14 22:45:08.534715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.748 [2024-12-14 22:45:08.534723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.749 [2024-12-14 22:45:08.534729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.749 [2024-12-14 22:45:08.534736] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.749 [2024-12-14 22:45:08.546789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.749 [2024-12-14 22:45:08.547235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.749 [2024-12-14 22:45:08.547279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.749 [2024-12-14 22:45:08.547302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.749 [2024-12-14 22:45:08.547887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.749 [2024-12-14 22:45:08.548281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.749 [2024-12-14 22:45:08.548289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.749 [2024-12-14 22:45:08.548295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.749 [2024-12-14 22:45:08.548301] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.749 [2024-12-14 22:45:08.559606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.749 [2024-12-14 22:45:08.559994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.749 [2024-12-14 22:45:08.560011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.749 [2024-12-14 22:45:08.560017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.749 [2024-12-14 22:45:08.560177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.749 [2024-12-14 22:45:08.560335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.749 [2024-12-14 22:45:08.560343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.749 [2024-12-14 22:45:08.560349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.749 [2024-12-14 22:45:08.560354] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.749 [2024-12-14 22:45:08.572440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.749 [2024-12-14 22:45:08.572825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.749 [2024-12-14 22:45:08.572875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.749 [2024-12-14 22:45:08.572899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.749 [2024-12-14 22:45:08.573497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.749 [2024-12-14 22:45:08.573968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.749 [2024-12-14 22:45:08.573986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.749 [2024-12-14 22:45:08.574000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.749 [2024-12-14 22:45:08.574013] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.749 [2024-12-14 22:45:08.587355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.749 [2024-12-14 22:45:08.587879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.749 [2024-12-14 22:45:08.587933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.749 [2024-12-14 22:45:08.587958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.749 [2024-12-14 22:45:08.588404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.749 [2024-12-14 22:45:08.588659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.749 [2024-12-14 22:45:08.588670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.749 [2024-12-14 22:45:08.588679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.749 [2024-12-14 22:45:08.588689] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.749 [2024-12-14 22:45:08.600278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.749 [2024-12-14 22:45:08.600711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.749 [2024-12-14 22:45:08.600764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.749 [2024-12-14 22:45:08.600788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.749 [2024-12-14 22:45:08.601277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.749 [2024-12-14 22:45:08.601446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.749 [2024-12-14 22:45:08.601453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.749 [2024-12-14 22:45:08.601459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.749 [2024-12-14 22:45:08.601466] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.749 [2024-12-14 22:45:08.613196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.749 [2024-12-14 22:45:08.613541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.749 [2024-12-14 22:45:08.613557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.749 [2024-12-14 22:45:08.613564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.749 [2024-12-14 22:45:08.613732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.749 [2024-12-14 22:45:08.613899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.749 [2024-12-14 22:45:08.613913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.749 [2024-12-14 22:45:08.613919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.749 [2024-12-14 22:45:08.613926] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.749 [2024-12-14 22:45:08.626127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.749 [2024-12-14 22:45:08.626494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.749 [2024-12-14 22:45:08.626538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:47.749 [2024-12-14 22:45:08.626561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:47.749 [2024-12-14 22:45:08.627112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:47.749 [2024-12-14 22:45:08.627286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.749 [2024-12-14 22:45:08.627295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.749 [2024-12-14 22:45:08.627301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.749 [2024-12-14 22:45:08.627307] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.011 [2024-12-14 22:45:08.639050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.011 [2024-12-14 22:45:08.639466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.011 [2024-12-14 22:45:08.639482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.011 [2024-12-14 22:45:08.639490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.011 [2024-12-14 22:45:08.639662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.011 [2024-12-14 22:45:08.639830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.011 [2024-12-14 22:45:08.639840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.011 [2024-12-14 22:45:08.639848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.011 [2024-12-14 22:45:08.639854] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.011 7283.00 IOPS, 28.45 MiB/s [2024-12-14T21:45:08.895Z] [2024-12-14 22:45:08.652004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.011 [2024-12-14 22:45:08.652438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.011 [2024-12-14 22:45:08.652455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.011 [2024-12-14 22:45:08.652462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.011 [2024-12-14 22:45:08.652636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.011 [2024-12-14 22:45:08.652794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.011 [2024-12-14 22:45:08.652802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.011 [2024-12-14 22:45:08.652808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.011 [2024-12-14 22:45:08.652814] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.011 [2024-12-14 22:45:08.664815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.011 [2024-12-14 22:45:08.665260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.011 [2024-12-14 22:45:08.665276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.011 [2024-12-14 22:45:08.665283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.011 [2024-12-14 22:45:08.665451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.011 [2024-12-14 22:45:08.665619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.011 [2024-12-14 22:45:08.665627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.011 [2024-12-14 22:45:08.665633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.011 [2024-12-14 22:45:08.665639] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.011 [2024-12-14 22:45:08.677630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.011 [2024-12-14 22:45:08.678012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.011 [2024-12-14 22:45:08.678058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.011 [2024-12-14 22:45:08.678082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.011 [2024-12-14 22:45:08.678576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.011 [2024-12-14 22:45:08.678744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.011 [2024-12-14 22:45:08.678755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.011 [2024-12-14 22:45:08.678762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.011 [2024-12-14 22:45:08.678769] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.011 [2024-12-14 22:45:08.690402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.011 [2024-12-14 22:45:08.690765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.011 [2024-12-14 22:45:08.690781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.011 [2024-12-14 22:45:08.690787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.011 [2024-12-14 22:45:08.690968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.011 [2024-12-14 22:45:08.691136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.011 [2024-12-14 22:45:08.691144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.011 [2024-12-14 22:45:08.691150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.011 [2024-12-14 22:45:08.691156] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.011 [2024-12-14 22:45:08.703321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.011 [2024-12-14 22:45:08.703707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.011 [2024-12-14 22:45:08.703723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.011 [2024-12-14 22:45:08.703730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.011 [2024-12-14 22:45:08.703898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.011 [2024-12-14 22:45:08.704071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.011 [2024-12-14 22:45:08.704079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.011 [2024-12-14 22:45:08.704085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.011 [2024-12-14 22:45:08.704091] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.011 [2024-12-14 22:45:08.716194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.011 [2024-12-14 22:45:08.716570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.011 [2024-12-14 22:45:08.716586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.011 [2024-12-14 22:45:08.716593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.011 [2024-12-14 22:45:08.716762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.011 [2024-12-14 22:45:08.716952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.011 [2024-12-14 22:45:08.716961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.011 [2024-12-14 22:45:08.716968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.011 [2024-12-14 22:45:08.716974] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.011 [2024-12-14 22:45:08.729011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.011 [2024-12-14 22:45:08.729437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.011 [2024-12-14 22:45:08.729454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.011 [2024-12-14 22:45:08.729461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.011 [2024-12-14 22:45:08.729629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.011 [2024-12-14 22:45:08.729797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.011 [2024-12-14 22:45:08.729804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.011 [2024-12-14 22:45:08.729810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.011 [2024-12-14 22:45:08.729817] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.011 [2024-12-14 22:45:08.742001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.011 [2024-12-14 22:45:08.742408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.011 [2024-12-14 22:45:08.742424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.011 [2024-12-14 22:45:08.742431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.011 [2024-12-14 22:45:08.742604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.011 [2024-12-14 22:45:08.742776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.011 [2024-12-14 22:45:08.742785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.011 [2024-12-14 22:45:08.742791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.011 [2024-12-14 22:45:08.742797] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.012 [2024-12-14 22:45:08.754988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.012 [2024-12-14 22:45:08.755419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.012 [2024-12-14 22:45:08.755464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.012 [2024-12-14 22:45:08.755487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.012 [2024-12-14 22:45:08.756086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.012 [2024-12-14 22:45:08.756625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.012 [2024-12-14 22:45:08.756633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.012 [2024-12-14 22:45:08.756639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.012 [2024-12-14 22:45:08.756646] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.012 [2024-12-14 22:45:08.767736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.012 [2024-12-14 22:45:08.768141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.012 [2024-12-14 22:45:08.768202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.012 [2024-12-14 22:45:08.768226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.012 [2024-12-14 22:45:08.768698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.012 [2024-12-14 22:45:08.768866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.012 [2024-12-14 22:45:08.768874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.012 [2024-12-14 22:45:08.768880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.012 [2024-12-14 22:45:08.768886] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.012 [2024-12-14 22:45:08.780807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.012 [2024-12-14 22:45:08.781230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.012 [2024-12-14 22:45:08.781277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.012 [2024-12-14 22:45:08.781301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.012 [2024-12-14 22:45:08.781764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.012 [2024-12-14 22:45:08.781939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.012 [2024-12-14 22:45:08.781948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.012 [2024-12-14 22:45:08.781955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.012 [2024-12-14 22:45:08.781962] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.012 [2024-12-14 22:45:08.793715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.012 [2024-12-14 22:45:08.794114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.012 [2024-12-14 22:45:08.794130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.012 [2024-12-14 22:45:08.794138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.012 [2024-12-14 22:45:08.794306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.012 [2024-12-14 22:45:08.794474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.012 [2024-12-14 22:45:08.794482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.012 [2024-12-14 22:45:08.794488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.012 [2024-12-14 22:45:08.794494] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.012 [2024-12-14 22:45:08.806713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.012 [2024-12-14 22:45:08.807162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.012 [2024-12-14 22:45:08.807180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.012 [2024-12-14 22:45:08.807187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.012 [2024-12-14 22:45:08.807359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.012 [2024-12-14 22:45:08.807527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.012 [2024-12-14 22:45:08.807535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.012 [2024-12-14 22:45:08.807541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.012 [2024-12-14 22:45:08.807547] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.012 [2024-12-14 22:45:08.819788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.012 [2024-12-14 22:45:08.820138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.012 [2024-12-14 22:45:08.820155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.012 [2024-12-14 22:45:08.820162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.012 [2024-12-14 22:45:08.820330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.012 [2024-12-14 22:45:08.820498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.012 [2024-12-14 22:45:08.820507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.012 [2024-12-14 22:45:08.820513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.012 [2024-12-14 22:45:08.820519] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.012 [2024-12-14 22:45:08.832616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.012 [2024-12-14 22:45:08.832964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.012 [2024-12-14 22:45:08.832981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.012 [2024-12-14 22:45:08.832988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.012 [2024-12-14 22:45:08.833148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.012 [2024-12-14 22:45:08.833307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.012 [2024-12-14 22:45:08.833315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.012 [2024-12-14 22:45:08.833322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.012 [2024-12-14 22:45:08.833328] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.012 [2024-12-14 22:45:08.845607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.012 [2024-12-14 22:45:08.846005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.012 [2024-12-14 22:45:08.846022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.012 [2024-12-14 22:45:08.846029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.012 [2024-12-14 22:45:08.846197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.012 [2024-12-14 22:45:08.846365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.012 [2024-12-14 22:45:08.846376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.012 [2024-12-14 22:45:08.846382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.012 [2024-12-14 22:45:08.846388] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.012 [2024-12-14 22:45:08.858609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.012 [2024-12-14 22:45:08.858948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.012 [2024-12-14 22:45:08.858965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.012 [2024-12-14 22:45:08.858973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.012 [2024-12-14 22:45:08.859146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.012 [2024-12-14 22:45:08.859318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.012 [2024-12-14 22:45:08.859326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.012 [2024-12-14 22:45:08.859332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.012 [2024-12-14 22:45:08.859339] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.012 [2024-12-14 22:45:08.871653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.012 [2024-12-14 22:45:08.871998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.012 [2024-12-14 22:45:08.872016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.012 [2024-12-14 22:45:08.872024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.012 [2024-12-14 22:45:08.872197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.012 [2024-12-14 22:45:08.872370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.012 [2024-12-14 22:45:08.872378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.012 [2024-12-14 22:45:08.872385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.012 [2024-12-14 22:45:08.872392] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.012 [2024-12-14 22:45:08.884772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.012 [2024-12-14 22:45:08.885057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.012 [2024-12-14 22:45:08.885075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.013 [2024-12-14 22:45:08.885082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.013 [2024-12-14 22:45:08.885256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.013 [2024-12-14 22:45:08.885428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.013 [2024-12-14 22:45:08.885437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.013 [2024-12-14 22:45:08.885443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.013 [2024-12-14 22:45:08.885449] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.274 [2024-12-14 22:45:08.897842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.274 [2024-12-14 22:45:08.898201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.274 [2024-12-14 22:45:08.898218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.274 [2024-12-14 22:45:08.898225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.274 [2024-12-14 22:45:08.898398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.274 [2024-12-14 22:45:08.898570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.274 [2024-12-14 22:45:08.898579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.274 [2024-12-14 22:45:08.898585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.274 [2024-12-14 22:45:08.898591] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.274 [2024-12-14 22:45:08.910880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.274 [2024-12-14 22:45:08.911283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.274 [2024-12-14 22:45:08.911300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.274 [2024-12-14 22:45:08.911307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.274 [2024-12-14 22:45:08.911480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.274 [2024-12-14 22:45:08.911654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.274 [2024-12-14 22:45:08.911662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.274 [2024-12-14 22:45:08.911669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.274 [2024-12-14 22:45:08.911675] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.274 [2024-12-14 22:45:08.924036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.274 [2024-12-14 22:45:08.924449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.274 [2024-12-14 22:45:08.924466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.274 [2024-12-14 22:45:08.924473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.274 [2024-12-14 22:45:08.924657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.274 [2024-12-14 22:45:08.924841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.274 [2024-12-14 22:45:08.924849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.274 [2024-12-14 22:45:08.924856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.274 [2024-12-14 22:45:08.924863] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.274 [2024-12-14 22:45:08.937141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.274 [2024-12-14 22:45:08.937522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.274 [2024-12-14 22:45:08.937542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.274 [2024-12-14 22:45:08.937549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.274 [2024-12-14 22:45:08.937722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.274 [2024-12-14 22:45:08.937894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.274 [2024-12-14 22:45:08.937909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.274 [2024-12-14 22:45:08.937916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.274 [2024-12-14 22:45:08.937923] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.274 [2024-12-14 22:45:08.950265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.274 [2024-12-14 22:45:08.950654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.274 [2024-12-14 22:45:08.950672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.275 [2024-12-14 22:45:08.950679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.275 [2024-12-14 22:45:08.950848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.275 [2024-12-14 22:45:08.951026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.275 [2024-12-14 22:45:08.951035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.275 [2024-12-14 22:45:08.951040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.275 [2024-12-14 22:45:08.951047] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.275 [2024-12-14 22:45:08.963092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.275 [2024-12-14 22:45:08.963439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.275 [2024-12-14 22:45:08.963454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.275 [2024-12-14 22:45:08.963461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.275 [2024-12-14 22:45:08.963620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.275 [2024-12-14 22:45:08.963778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.275 [2024-12-14 22:45:08.963786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.275 [2024-12-14 22:45:08.963792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.275 [2024-12-14 22:45:08.963798] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.275 [2024-12-14 22:45:08.975972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.275 [2024-12-14 22:45:08.976272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.275 [2024-12-14 22:45:08.976288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.275 [2024-12-14 22:45:08.976295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.275 [2024-12-14 22:45:08.976466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.275 [2024-12-14 22:45:08.976635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.275 [2024-12-14 22:45:08.976642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.275 [2024-12-14 22:45:08.976648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.275 [2024-12-14 22:45:08.976655] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.275 [2024-12-14 22:45:08.988883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.275 [2024-12-14 22:45:08.989226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.275 [2024-12-14 22:45:08.989243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.275 [2024-12-14 22:45:08.989251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.275 [2024-12-14 22:45:08.989423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.275 [2024-12-14 22:45:08.989597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.275 [2024-12-14 22:45:08.989605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.275 [2024-12-14 22:45:08.989611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.275 [2024-12-14 22:45:08.989617] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.275 [2024-12-14 22:45:09.001816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.275 [2024-12-14 22:45:09.002108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.275 [2024-12-14 22:45:09.002125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.275 [2024-12-14 22:45:09.002132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.275 [2024-12-14 22:45:09.002300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.275 [2024-12-14 22:45:09.002468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.275 [2024-12-14 22:45:09.002476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.275 [2024-12-14 22:45:09.002482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.275 [2024-12-14 22:45:09.002488] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.275 [2024-12-14 22:45:09.014623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.275 [2024-12-14 22:45:09.014992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.275 [2024-12-14 22:45:09.015009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.275 [2024-12-14 22:45:09.015016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.275 [2024-12-14 22:45:09.015184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.275 [2024-12-14 22:45:09.015352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.275 [2024-12-14 22:45:09.015363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.275 [2024-12-14 22:45:09.015369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.275 [2024-12-14 22:45:09.015375] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.275 [2024-12-14 22:45:09.027430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.275 [2024-12-14 22:45:09.027853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.275 [2024-12-14 22:45:09.027869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.275 [2024-12-14 22:45:09.027876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.275 [2024-12-14 22:45:09.028050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.275 [2024-12-14 22:45:09.028218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.275 [2024-12-14 22:45:09.028226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.275 [2024-12-14 22:45:09.028232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.275 [2024-12-14 22:45:09.028238] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.275 [2024-12-14 22:45:09.040412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.275 [2024-12-14 22:45:09.040825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.275 [2024-12-14 22:45:09.040841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.275 [2024-12-14 22:45:09.040848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.275 [2024-12-14 22:45:09.041023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.275 [2024-12-14 22:45:09.041191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.275 [2024-12-14 22:45:09.041199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.275 [2024-12-14 22:45:09.041205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.275 [2024-12-14 22:45:09.041211] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.275 [2024-12-14 22:45:09.053300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.275 [2024-12-14 22:45:09.053723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.275 [2024-12-14 22:45:09.053740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.275 [2024-12-14 22:45:09.053747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.275 [2024-12-14 22:45:09.053922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.275 [2024-12-14 22:45:09.054091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.275 [2024-12-14 22:45:09.054099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.275 [2024-12-14 22:45:09.054105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.275 [2024-12-14 22:45:09.054111] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.275 [2024-12-14 22:45:09.066159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.275 [2024-12-14 22:45:09.066611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.275 [2024-12-14 22:45:09.066627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.275 [2024-12-14 22:45:09.066635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.275 [2024-12-14 22:45:09.066802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.275 [2024-12-14 22:45:09.066977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.276 [2024-12-14 22:45:09.066986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.276 [2024-12-14 22:45:09.066992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.276 [2024-12-14 22:45:09.066998] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.276 [2024-12-14 22:45:09.079091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.276 [2024-12-14 22:45:09.079444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.276 [2024-12-14 22:45:09.079460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.276 [2024-12-14 22:45:09.079467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.276 [2024-12-14 22:45:09.079634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.276 [2024-12-14 22:45:09.079802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.276 [2024-12-14 22:45:09.079809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.276 [2024-12-14 22:45:09.079816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.276 [2024-12-14 22:45:09.079822] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.276 [2024-12-14 22:45:09.091848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.276 [2024-12-14 22:45:09.092143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.276 [2024-12-14 22:45:09.092160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.276 [2024-12-14 22:45:09.092167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.276 [2024-12-14 22:45:09.092335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.276 [2024-12-14 22:45:09.092502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.276 [2024-12-14 22:45:09.092510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.276 [2024-12-14 22:45:09.092517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.276 [2024-12-14 22:45:09.092523] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.276 [2024-12-14 22:45:09.104679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.276 [2024-12-14 22:45:09.105113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.276 [2024-12-14 22:45:09.105134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.276 [2024-12-14 22:45:09.105142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.276 [2024-12-14 22:45:09.105315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.276 [2024-12-14 22:45:09.105488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.276 [2024-12-14 22:45:09.105497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.276 [2024-12-14 22:45:09.105503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.276 [2024-12-14 22:45:09.105509] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.276 [2024-12-14 22:45:09.117558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.276 [2024-12-14 22:45:09.118010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.276 [2024-12-14 22:45:09.118056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.276 [2024-12-14 22:45:09.118080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.276 [2024-12-14 22:45:09.118516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.276 [2024-12-14 22:45:09.118685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.276 [2024-12-14 22:45:09.118693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.276 [2024-12-14 22:45:09.118699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.276 [2024-12-14 22:45:09.118705] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.276 [2024-12-14 22:45:09.132735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.276 [2024-12-14 22:45:09.133221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.276 [2024-12-14 22:45:09.133243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.276 [2024-12-14 22:45:09.133253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.276 [2024-12-14 22:45:09.133508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.276 [2024-12-14 22:45:09.133763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.276 [2024-12-14 22:45:09.133774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.276 [2024-12-14 22:45:09.133783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.276 [2024-12-14 22:45:09.133792] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.276 [2024-12-14 22:45:09.145655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.276 [2024-12-14 22:45:09.145999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.276 [2024-12-14 22:45:09.146017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.276 [2024-12-14 22:45:09.146024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.276 [2024-12-14 22:45:09.146195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.276 [2024-12-14 22:45:09.146363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.276 [2024-12-14 22:45:09.146371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.276 [2024-12-14 22:45:09.146377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.276 [2024-12-14 22:45:09.146383] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.537 [2024-12-14 22:45:09.158753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.537 [2024-12-14 22:45:09.159038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.537 [2024-12-14 22:45:09.159054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.537 [2024-12-14 22:45:09.159061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.537 [2024-12-14 22:45:09.159235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.537 [2024-12-14 22:45:09.159407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.537 [2024-12-14 22:45:09.159416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.537 [2024-12-14 22:45:09.159422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.537 [2024-12-14 22:45:09.159428] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.537 [2024-12-14 22:45:09.171914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.537 [2024-12-14 22:45:09.172299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.537 [2024-12-14 22:45:09.172316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.537 [2024-12-14 22:45:09.172324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.537 [2024-12-14 22:45:09.172508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.537 [2024-12-14 22:45:09.172690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.537 [2024-12-14 22:45:09.172699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.537 [2024-12-14 22:45:09.172705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.537 [2024-12-14 22:45:09.172712] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.537 [2024-12-14 22:45:09.185102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.537 [2024-12-14 22:45:09.185514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.537 [2024-12-14 22:45:09.185532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.537 [2024-12-14 22:45:09.185539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.537 [2024-12-14 22:45:09.185723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.537 [2024-12-14 22:45:09.185916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.537 [2024-12-14 22:45:09.185928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.537 [2024-12-14 22:45:09.185952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.537 [2024-12-14 22:45:09.185960] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.537 [2024-12-14 22:45:09.198134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.537 [2024-12-14 22:45:09.198498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.537 [2024-12-14 22:45:09.198514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.537 [2024-12-14 22:45:09.198521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.537 [2024-12-14 22:45:09.198694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.538 [2024-12-14 22:45:09.198866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.538 [2024-12-14 22:45:09.198875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.538 [2024-12-14 22:45:09.198881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.538 [2024-12-14 22:45:09.198887] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.538 [2024-12-14 22:45:09.211350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.538 [2024-12-14 22:45:09.211790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.538 [2024-12-14 22:45:09.211807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.538 [2024-12-14 22:45:09.211815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.538 [2024-12-14 22:45:09.212006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.538 [2024-12-14 22:45:09.212191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.538 [2024-12-14 22:45:09.212199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.538 [2024-12-14 22:45:09.212206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.538 [2024-12-14 22:45:09.212213] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.538 [2024-12-14 22:45:09.224627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.538 [2024-12-14 22:45:09.225071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.538 [2024-12-14 22:45:09.225088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.538 [2024-12-14 22:45:09.225096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.538 [2024-12-14 22:45:09.225281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.538 [2024-12-14 22:45:09.225465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.538 [2024-12-14 22:45:09.225474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.538 [2024-12-14 22:45:09.225480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.538 [2024-12-14 22:45:09.225487] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.538 [2024-12-14 22:45:09.237882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.538 [2024-12-14 22:45:09.238332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.538 [2024-12-14 22:45:09.238350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.538 [2024-12-14 22:45:09.238358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.538 [2024-12-14 22:45:09.238542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.538 [2024-12-14 22:45:09.238727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.538 [2024-12-14 22:45:09.238735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.538 [2024-12-14 22:45:09.238742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.538 [2024-12-14 22:45:09.238749] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.538 [2024-12-14 22:45:09.251043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.538 [2024-12-14 22:45:09.251495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.538 [2024-12-14 22:45:09.251513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.538 [2024-12-14 22:45:09.251520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.538 [2024-12-14 22:45:09.251705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.538 [2024-12-14 22:45:09.251888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.538 [2024-12-14 22:45:09.251897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.538 [2024-12-14 22:45:09.251911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.538 [2024-12-14 22:45:09.251919] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.538 [2024-12-14 22:45:09.264293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.538 [2024-12-14 22:45:09.264736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.538 [2024-12-14 22:45:09.264753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.538 [2024-12-14 22:45:09.264761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.538 [2024-12-14 22:45:09.264952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.538 [2024-12-14 22:45:09.265136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.538 [2024-12-14 22:45:09.265145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.538 [2024-12-14 22:45:09.265152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.538 [2024-12-14 22:45:09.265158] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.538 [2024-12-14 22:45:09.277542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.538 [2024-12-14 22:45:09.277997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.538 [2024-12-14 22:45:09.278017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.538 [2024-12-14 22:45:09.278025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.538 [2024-12-14 22:45:09.278209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.538 [2024-12-14 22:45:09.278393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.538 [2024-12-14 22:45:09.278402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.538 [2024-12-14 22:45:09.278408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.538 [2024-12-14 22:45:09.278415] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.538 [2024-12-14 22:45:09.290638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.538 [2024-12-14 22:45:09.290996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.538 [2024-12-14 22:45:09.291013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.538 [2024-12-14 22:45:09.291021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.538 [2024-12-14 22:45:09.291194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.538 [2024-12-14 22:45:09.291367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.538 [2024-12-14 22:45:09.291376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.538 [2024-12-14 22:45:09.291382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.538 [2024-12-14 22:45:09.291388] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.538 [2024-12-14 22:45:09.303764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.538 [2024-12-14 22:45:09.304119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.538 [2024-12-14 22:45:09.304135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.538 [2024-12-14 22:45:09.304143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.538 [2024-12-14 22:45:09.304316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.538 [2024-12-14 22:45:09.304488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.538 [2024-12-14 22:45:09.304497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.538 [2024-12-14 22:45:09.304503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.538 [2024-12-14 22:45:09.304509] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.538 [2024-12-14 22:45:09.316734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.538 [2024-12-14 22:45:09.317160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.538 [2024-12-14 22:45:09.317177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:48.538 [2024-12-14 22:45:09.317184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:48.538 [2024-12-14 22:45:09.317355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:48.538 [2024-12-14 22:45:09.317523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.538 [2024-12-14 22:45:09.317531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.538 [2024-12-14 22:45:09.317537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.538 [2024-12-14 22:45:09.317543] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.538 [2024-12-14 22:45:09.329507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.538 [2024-12-14 22:45:09.329923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.538 [2024-12-14 22:45:09.329940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.538 [2024-12-14 22:45:09.329947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.538 [2024-12-14 22:45:09.330106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.539 [2024-12-14 22:45:09.330270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.539 [2024-12-14 22:45:09.330279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.539 [2024-12-14 22:45:09.330285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.539 [2024-12-14 22:45:09.330290] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.539 [2024-12-14 22:45:09.342284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.539 [2024-12-14 22:45:09.342700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.539 [2024-12-14 22:45:09.342716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.539 [2024-12-14 22:45:09.342723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.539 [2024-12-14 22:45:09.342891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.539 [2024-12-14 22:45:09.343084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.539 [2024-12-14 22:45:09.343093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.539 [2024-12-14 22:45:09.343099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.539 [2024-12-14 22:45:09.343105] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.539 [2024-12-14 22:45:09.355101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.539 [2024-12-14 22:45:09.355492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.539 [2024-12-14 22:45:09.355541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.539 [2024-12-14 22:45:09.355565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.539 [2024-12-14 22:45:09.356080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.539 [2024-12-14 22:45:09.356249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.539 [2024-12-14 22:45:09.356259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.539 [2024-12-14 22:45:09.356266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.539 [2024-12-14 22:45:09.356272] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.539 [2024-12-14 22:45:09.367951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.539 [2024-12-14 22:45:09.368368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.539 [2024-12-14 22:45:09.368384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.539 [2024-12-14 22:45:09.368391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.539 [2024-12-14 22:45:09.368550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.539 [2024-12-14 22:45:09.368709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.539 [2024-12-14 22:45:09.368717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.539 [2024-12-14 22:45:09.368723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.539 [2024-12-14 22:45:09.368729] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.539 [2024-12-14 22:45:09.380714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.539 [2024-12-14 22:45:09.381131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.539 [2024-12-14 22:45:09.381148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.539 [2024-12-14 22:45:09.381156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.539 [2024-12-14 22:45:09.381323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.539 [2024-12-14 22:45:09.381491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.539 [2024-12-14 22:45:09.381499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.539 [2024-12-14 22:45:09.381505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.539 [2024-12-14 22:45:09.381511] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.539 [2024-12-14 22:45:09.393459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.539 [2024-12-14 22:45:09.393872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.539 [2024-12-14 22:45:09.393888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.539 [2024-12-14 22:45:09.393895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.539 [2024-12-14 22:45:09.394082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.539 [2024-12-14 22:45:09.394250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.539 [2024-12-14 22:45:09.394257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.539 [2024-12-14 22:45:09.394264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.539 [2024-12-14 22:45:09.394270] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.539 [2024-12-14 22:45:09.406247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.539 [2024-12-14 22:45:09.406689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.539 [2024-12-14 22:45:09.406705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.539 [2024-12-14 22:45:09.406712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.539 [2024-12-14 22:45:09.406871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.539 [2024-12-14 22:45:09.407056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.539 [2024-12-14 22:45:09.407065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.539 [2024-12-14 22:45:09.407071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.539 [2024-12-14 22:45:09.407078] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.539 [2024-12-14 22:45:09.419217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.539 [2024-12-14 22:45:09.419639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.539 [2024-12-14 22:45:09.419655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.539 [2024-12-14 22:45:09.419662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.539 [2024-12-14 22:45:09.419830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.800 [2024-12-14 22:45:09.420021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.800 [2024-12-14 22:45:09.420031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.800 [2024-12-14 22:45:09.420041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.800 [2024-12-14 22:45:09.420049] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.800 [2024-12-14 22:45:09.431999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.800 [2024-12-14 22:45:09.432443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.800 [2024-12-14 22:45:09.432488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.800 [2024-12-14 22:45:09.432511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.800 [2024-12-14 22:45:09.432987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.800 [2024-12-14 22:45:09.433156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.800 [2024-12-14 22:45:09.433164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.800 [2024-12-14 22:45:09.433170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.800 [2024-12-14 22:45:09.433176] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.800 [2024-12-14 22:45:09.444883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.800 [2024-12-14 22:45:09.445297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.800 [2024-12-14 22:45:09.445317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.800 [2024-12-14 22:45:09.445324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.800 [2024-12-14 22:45:09.445483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.800 [2024-12-14 22:45:09.445641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.800 [2024-12-14 22:45:09.445649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.800 [2024-12-14 22:45:09.445655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.800 [2024-12-14 22:45:09.445661] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.800 [2024-12-14 22:45:09.457706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.800 [2024-12-14 22:45:09.458127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.800 [2024-12-14 22:45:09.458143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.800 [2024-12-14 22:45:09.458150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.801 [2024-12-14 22:45:09.458318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.801 [2024-12-14 22:45:09.458485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.801 [2024-12-14 22:45:09.458493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.801 [2024-12-14 22:45:09.458499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.801 [2024-12-14 22:45:09.458505] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.801 [2024-12-14 22:45:09.470510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.801 [2024-12-14 22:45:09.470962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.801 [2024-12-14 22:45:09.471008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.801 [2024-12-14 22:45:09.471032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.801 [2024-12-14 22:45:09.471443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.801 [2024-12-14 22:45:09.471602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.801 [2024-12-14 22:45:09.471609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.801 [2024-12-14 22:45:09.471615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.801 [2024-12-14 22:45:09.471621] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.801 [2024-12-14 22:45:09.483316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.801 [2024-12-14 22:45:09.483753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.801 [2024-12-14 22:45:09.483769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.801 [2024-12-14 22:45:09.483776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.801 [2024-12-14 22:45:09.483954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.801 [2024-12-14 22:45:09.484122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.801 [2024-12-14 22:45:09.484130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.801 [2024-12-14 22:45:09.484136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.801 [2024-12-14 22:45:09.484143] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.801 [2024-12-14 22:45:09.496117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.801 [2024-12-14 22:45:09.496542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.801 [2024-12-14 22:45:09.496587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.801 [2024-12-14 22:45:09.496610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.801 [2024-12-14 22:45:09.497210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.801 [2024-12-14 22:45:09.497590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.801 [2024-12-14 22:45:09.497598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.801 [2024-12-14 22:45:09.497604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.801 [2024-12-14 22:45:09.497610] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.801 [2024-12-14 22:45:09.508994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.801 [2024-12-14 22:45:09.509425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.801 [2024-12-14 22:45:09.509441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.801 [2024-12-14 22:45:09.509448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.801 [2024-12-14 22:45:09.509622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.801 [2024-12-14 22:45:09.509795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.801 [2024-12-14 22:45:09.509803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.801 [2024-12-14 22:45:09.509809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.801 [2024-12-14 22:45:09.509816] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.801 [2024-12-14 22:45:09.521894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.801 [2024-12-14 22:45:09.522311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.801 [2024-12-14 22:45:09.522327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.801 [2024-12-14 22:45:09.522334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.801 [2024-12-14 22:45:09.522502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.801 [2024-12-14 22:45:09.522670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.801 [2024-12-14 22:45:09.522681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.801 [2024-12-14 22:45:09.522688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.801 [2024-12-14 22:45:09.522694] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.801 [2024-12-14 22:45:09.534632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.801 [2024-12-14 22:45:09.535050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.801 [2024-12-14 22:45:09.535097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.801 [2024-12-14 22:45:09.535122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.801 [2024-12-14 22:45:09.535664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.801 [2024-12-14 22:45:09.535823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.801 [2024-12-14 22:45:09.535831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.801 [2024-12-14 22:45:09.535837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.801 [2024-12-14 22:45:09.535843] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.801 [2024-12-14 22:45:09.547458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.801 [2024-12-14 22:45:09.547849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.801 [2024-12-14 22:45:09.547865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.801 [2024-12-14 22:45:09.547872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.801 [2024-12-14 22:45:09.548059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.801 [2024-12-14 22:45:09.548228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.801 [2024-12-14 22:45:09.548236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.801 [2024-12-14 22:45:09.548242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.801 [2024-12-14 22:45:09.548248] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.801 [2024-12-14 22:45:09.560234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.801 [2024-12-14 22:45:09.560641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.801 [2024-12-14 22:45:09.560657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.801 [2024-12-14 22:45:09.560664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.801 [2024-12-14 22:45:09.560824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.801 [2024-12-14 22:45:09.561008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.801 [2024-12-14 22:45:09.561016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.801 [2024-12-14 22:45:09.561023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.801 [2024-12-14 22:45:09.561029] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.801 [2024-12-14 22:45:09.573014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.801 [2024-12-14 22:45:09.573429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.801 [2024-12-14 22:45:09.573445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.801 [2024-12-14 22:45:09.573451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.801 [2024-12-14 22:45:09.573610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.801 [2024-12-14 22:45:09.573768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.801 [2024-12-14 22:45:09.573776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.801 [2024-12-14 22:45:09.573782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.801 [2024-12-14 22:45:09.573788] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.801 [2024-12-14 22:45:09.585839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.801 [2024-12-14 22:45:09.586276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.801 [2024-12-14 22:45:09.586292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:48.801 [2024-12-14 22:45:09.586299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:48.802 [2024-12-14 22:45:09.586468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:48.802 [2024-12-14 22:45:09.586635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.802 [2024-12-14 22:45:09.586643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.802 [2024-12-14 22:45:09.586649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.802 [2024-12-14 22:45:09.586655] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.802 5826.40 IOPS, 22.76 MiB/s [2024-12-14T21:45:09.686Z]
00:35:49.326 [2024-12-14 22:45:09.949614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.326 [2024-12-14 22:45:09.950060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.326 [2024-12-14 22:45:09.950113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.326 [2024-12-14 22:45:09.950137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.326 [2024-12-14 22:45:09.950701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.326 [2024-12-14 22:45:09.950874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.326 [2024-12-14 22:45:09.950882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.326 [2024-12-14 22:45:09.950889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.326 [2024-12-14 22:45:09.950895] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.326 [2024-12-14 22:45:09.962398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.326 [2024-12-14 22:45:09.962732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.326 [2024-12-14 22:45:09.962747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.327 [2024-12-14 22:45:09.962754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.327 [2024-12-14 22:45:09.962928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.327 [2024-12-14 22:45:09.963096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.327 [2024-12-14 22:45:09.963104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.327 [2024-12-14 22:45:09.963111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.327 [2024-12-14 22:45:09.963117] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.327 [2024-12-14 22:45:09.975247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.327 [2024-12-14 22:45:09.975705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.327 [2024-12-14 22:45:09.975750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.327 [2024-12-14 22:45:09.975773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.327 [2024-12-14 22:45:09.976374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.327 [2024-12-14 22:45:09.976762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.327 [2024-12-14 22:45:09.976770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.327 [2024-12-14 22:45:09.976776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.327 [2024-12-14 22:45:09.976783] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.327 [2024-12-14 22:45:09.988014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.327 [2024-12-14 22:45:09.988348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.327 [2024-12-14 22:45:09.988364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.327 [2024-12-14 22:45:09.988371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.327 [2024-12-14 22:45:09.988533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.327 [2024-12-14 22:45:09.988692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.327 [2024-12-14 22:45:09.988700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.327 [2024-12-14 22:45:09.988706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.327 [2024-12-14 22:45:09.988711] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.327 [2024-12-14 22:45:10.000840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.327 [2024-12-14 22:45:10.001272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.327 [2024-12-14 22:45:10.001290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.327 [2024-12-14 22:45:10.001297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.327 [2024-12-14 22:45:10.001470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.327 [2024-12-14 22:45:10.001643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.327 [2024-12-14 22:45:10.001651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.327 [2024-12-14 22:45:10.001658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.327 [2024-12-14 22:45:10.001664] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.327 [2024-12-14 22:45:10.014265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.327 [2024-12-14 22:45:10.014650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.327 [2024-12-14 22:45:10.014666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.327 [2024-12-14 22:45:10.014674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.327 [2024-12-14 22:45:10.014848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.327 [2024-12-14 22:45:10.015025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.327 [2024-12-14 22:45:10.015034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.327 [2024-12-14 22:45:10.015040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.327 [2024-12-14 22:45:10.015046] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.327 [2024-12-14 22:45:10.027347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.327 [2024-12-14 22:45:10.027703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.327 [2024-12-14 22:45:10.027719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.327 [2024-12-14 22:45:10.027727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.327 [2024-12-14 22:45:10.027901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.327 [2024-12-14 22:45:10.028080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.327 [2024-12-14 22:45:10.028095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.327 [2024-12-14 22:45:10.028102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.327 [2024-12-14 22:45:10.028109] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.327 [2024-12-14 22:45:10.040509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.327 [2024-12-14 22:45:10.040873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.327 [2024-12-14 22:45:10.040891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.327 [2024-12-14 22:45:10.040899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.327 [2024-12-14 22:45:10.041079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.327 [2024-12-14 22:45:10.041252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.327 [2024-12-14 22:45:10.041261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.327 [2024-12-14 22:45:10.041267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.327 [2024-12-14 22:45:10.041273] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.327 [2024-12-14 22:45:10.053533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.327 [2024-12-14 22:45:10.053958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.327 [2024-12-14 22:45:10.053976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.327 [2024-12-14 22:45:10.053983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.327 [2024-12-14 22:45:10.054156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.327 [2024-12-14 22:45:10.054328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.327 [2024-12-14 22:45:10.054337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.327 [2024-12-14 22:45:10.054343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.327 [2024-12-14 22:45:10.054349] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.327 [2024-12-14 22:45:10.066538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.327 [2024-12-14 22:45:10.066971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.327 [2024-12-14 22:45:10.066987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.327 [2024-12-14 22:45:10.066995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.327 [2024-12-14 22:45:10.067168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.327 [2024-12-14 22:45:10.067340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.327 [2024-12-14 22:45:10.067348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.327 [2024-12-14 22:45:10.067355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.327 [2024-12-14 22:45:10.067361] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.327 [2024-12-14 22:45:10.080134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.327 [2024-12-14 22:45:10.080589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.327 [2024-12-14 22:45:10.080606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.327 [2024-12-14 22:45:10.080613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.327 [2024-12-14 22:45:10.080787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.327 [2024-12-14 22:45:10.080964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.327 [2024-12-14 22:45:10.080974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.327 [2024-12-14 22:45:10.080980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.327 [2024-12-14 22:45:10.080987] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.328 [2024-12-14 22:45:10.093043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.328 [2024-12-14 22:45:10.093395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.328 [2024-12-14 22:45:10.093412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.328 [2024-12-14 22:45:10.093419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.328 [2024-12-14 22:45:10.093588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.328 [2024-12-14 22:45:10.093755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.328 [2024-12-14 22:45:10.093763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.328 [2024-12-14 22:45:10.093769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.328 [2024-12-14 22:45:10.093776] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.328 [2024-12-14 22:45:10.106015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.328 [2024-12-14 22:45:10.106445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.328 [2024-12-14 22:45:10.106491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.328 [2024-12-14 22:45:10.106514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.328 [2024-12-14 22:45:10.106979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.328 [2024-12-14 22:45:10.107153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.328 [2024-12-14 22:45:10.107162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.328 [2024-12-14 22:45:10.107168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.328 [2024-12-14 22:45:10.107174] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.328 [2024-12-14 22:45:10.118914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.328 [2024-12-14 22:45:10.119292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.328 [2024-12-14 22:45:10.119312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.328 [2024-12-14 22:45:10.119320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.328 [2024-12-14 22:45:10.119488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.328 [2024-12-14 22:45:10.119656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.328 [2024-12-14 22:45:10.119664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.328 [2024-12-14 22:45:10.119670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.328 [2024-12-14 22:45:10.119676] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.328 [2024-12-14 22:45:10.131915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.328 [2024-12-14 22:45:10.132290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.328 [2024-12-14 22:45:10.132306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.328 [2024-12-14 22:45:10.132313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.328 [2024-12-14 22:45:10.132486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.328 [2024-12-14 22:45:10.132658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.328 [2024-12-14 22:45:10.132666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.328 [2024-12-14 22:45:10.132672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.328 [2024-12-14 22:45:10.132679] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.328 [2024-12-14 22:45:10.145005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.328 [2024-12-14 22:45:10.145468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.328 [2024-12-14 22:45:10.145512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.328 [2024-12-14 22:45:10.145536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.328 [2024-12-14 22:45:10.146072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.328 [2024-12-14 22:45:10.146246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.328 [2024-12-14 22:45:10.146254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.328 [2024-12-14 22:45:10.146260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.328 [2024-12-14 22:45:10.146266] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.328 [2024-12-14 22:45:10.157919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.328 [2024-12-14 22:45:10.158258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.328 [2024-12-14 22:45:10.158275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.328 [2024-12-14 22:45:10.158282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.328 [2024-12-14 22:45:10.158454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.328 [2024-12-14 22:45:10.158626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.328 [2024-12-14 22:45:10.158634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.328 [2024-12-14 22:45:10.158640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.328 [2024-12-14 22:45:10.158646] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.328 [2024-12-14 22:45:10.170880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.328 [2024-12-14 22:45:10.171223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.328 [2024-12-14 22:45:10.171239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.328 [2024-12-14 22:45:10.171246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.328 [2024-12-14 22:45:10.171415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.328 [2024-12-14 22:45:10.171582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.328 [2024-12-14 22:45:10.171590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.328 [2024-12-14 22:45:10.171596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.328 [2024-12-14 22:45:10.171602] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.328 [2024-12-14 22:45:10.183831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:49.328 [2024-12-14 22:45:10.184262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.328 [2024-12-14 22:45:10.184278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:49.328 [2024-12-14 22:45:10.184285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:49.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 535688 Killed                  "${NVMF_APP[@]}" "$@"
00:35:49.328 [2024-12-14 22:45:10.184458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:49.328 [2024-12-14 22:45:10.184631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:49.328 [2024-12-14 22:45:10.184639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:49.328 [2024-12-14 22:45:10.184646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:49.328 [2024-12-14 22:45:10.184652] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:49.328 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:35:49.328 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:35:49.328 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:49.328 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:49.328 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:49.328 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=537237
00:35:49.328 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 537237
00:35:49.328 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:35:49.328 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 537237 ']'
00:35:49.328 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:49.328 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:49.328 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:49.328 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:49.328 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:49.328 [2024-12-14 22:45:10.196851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:49.328 [2024-12-14 22:45:10.197280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.328 [2024-12-14 22:45:10.197297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420
00:35:49.329 [2024-12-14 22:45:10.197305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set
00:35:49.329 [2024-12-14 22:45:10.197480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor
00:35:49.329 [2024-12-14 22:45:10.197652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:49.329 [2024-12-14 22:45:10.197660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:49.329 [2024-12-14 22:45:10.197666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:49.329 [2024-12-14 22:45:10.197672] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:49.590 [2024-12-14 22:45:10.209875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.590 [2024-12-14 22:45:10.210238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-12-14 22:45:10.210254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.590 [2024-12-14 22:45:10.210261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.590 [2024-12-14 22:45:10.210434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.590 [2024-12-14 22:45:10.210607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.590 [2024-12-14 22:45:10.210614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.590 [2024-12-14 22:45:10.210621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.590 [2024-12-14 22:45:10.210627] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.590 [2024-12-14 22:45:10.222994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.590 [2024-12-14 22:45:10.223344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-12-14 22:45:10.223360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.590 [2024-12-14 22:45:10.223367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.590 [2024-12-14 22:45:10.223540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.590 [2024-12-14 22:45:10.223717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.590 [2024-12-14 22:45:10.223725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.590 [2024-12-14 22:45:10.223732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.590 [2024-12-14 22:45:10.223738] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.590 [2024-12-14 22:45:10.235982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.590 [2024-12-14 22:45:10.236417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-12-14 22:45:10.236434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.590 [2024-12-14 22:45:10.236442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.590 [2024-12-14 22:45:10.236615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.590 [2024-12-14 22:45:10.236788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.590 [2024-12-14 22:45:10.236796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.590 [2024-12-14 22:45:10.236803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.590 [2024-12-14 22:45:10.236809] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.590 [2024-12-14 22:45:10.242717] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:35:49.590 [2024-12-14 22:45:10.242755] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:49.590 [2024-12-14 22:45:10.249139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.590 [2024-12-14 22:45:10.249551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-12-14 22:45:10.249567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.590 [2024-12-14 22:45:10.249574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.590 [2024-12-14 22:45:10.249748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.590 [2024-12-14 22:45:10.249930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.590 [2024-12-14 22:45:10.249939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.590 [2024-12-14 22:45:10.249946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.590 [2024-12-14 22:45:10.249953] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.590 [2024-12-14 22:45:10.262184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.590 [2024-12-14 22:45:10.262592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-12-14 22:45:10.262609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.590 [2024-12-14 22:45:10.262617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.590 [2024-12-14 22:45:10.262791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.590 [2024-12-14 22:45:10.262973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.590 [2024-12-14 22:45:10.262982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.590 [2024-12-14 22:45:10.262988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.590 [2024-12-14 22:45:10.262995] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.590 [2024-12-14 22:45:10.275211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.590 [2024-12-14 22:45:10.275568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-12-14 22:45:10.275584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.590 [2024-12-14 22:45:10.275591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.590 [2024-12-14 22:45:10.275765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.590 [2024-12-14 22:45:10.275942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.590 [2024-12-14 22:45:10.275951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.590 [2024-12-14 22:45:10.275958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.590 [2024-12-14 22:45:10.275964] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.590 [2024-12-14 22:45:10.288186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.590 [2024-12-14 22:45:10.288590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-12-14 22:45:10.288607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.590 [2024-12-14 22:45:10.288614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.590 [2024-12-14 22:45:10.288788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.590 [2024-12-14 22:45:10.288966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.590 [2024-12-14 22:45:10.288975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.590 [2024-12-14 22:45:10.288982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.590 [2024-12-14 22:45:10.288988] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.590 [2024-12-14 22:45:10.301261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.590 [2024-12-14 22:45:10.301629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-12-14 22:45:10.301645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.590 [2024-12-14 22:45:10.301653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.590 [2024-12-14 22:45:10.301827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.590 [2024-12-14 22:45:10.302016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.590 [2024-12-14 22:45:10.302025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.591 [2024-12-14 22:45:10.302036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.591 [2024-12-14 22:45:10.302043] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.591 [2024-12-14 22:45:10.314260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.591 [2024-12-14 22:45:10.314625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-12-14 22:45:10.314641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.591 [2024-12-14 22:45:10.314648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.591 [2024-12-14 22:45:10.314821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.591 [2024-12-14 22:45:10.315003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.591 [2024-12-14 22:45:10.315012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.591 [2024-12-14 22:45:10.315019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.591 [2024-12-14 22:45:10.315025] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.591 [2024-12-14 22:45:10.322892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:49.591 [2024-12-14 22:45:10.327263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.591 [2024-12-14 22:45:10.327615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-12-14 22:45:10.327632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.591 [2024-12-14 22:45:10.327640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.591 [2024-12-14 22:45:10.327814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.591 [2024-12-14 22:45:10.327993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.591 [2024-12-14 22:45:10.328002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.591 [2024-12-14 22:45:10.328009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.591 [2024-12-14 22:45:10.328016] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.591 [2024-12-14 22:45:10.340248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.591 [2024-12-14 22:45:10.340599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-12-14 22:45:10.340616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.591 [2024-12-14 22:45:10.340623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.591 [2024-12-14 22:45:10.340797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.591 [2024-12-14 22:45:10.340975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.591 [2024-12-14 22:45:10.340984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.591 [2024-12-14 22:45:10.340991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.591 [2024-12-14 22:45:10.340997] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.591 [2024-12-14 22:45:10.344943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:49.591 [2024-12-14 22:45:10.344969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:49.591 [2024-12-14 22:45:10.344976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:49.591 [2024-12-14 22:45:10.344982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:35:49.591 [2024-12-14 22:45:10.344987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:49.591 [2024-12-14 22:45:10.346166] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:35:49.591 [2024-12-14 22:45:10.346273] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:49.591 [2024-12-14 22:45:10.346275] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:35:49.591 [2024-12-14 22:45:10.353263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.591 [2024-12-14 22:45:10.353726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-12-14 22:45:10.353747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.591 [2024-12-14 22:45:10.353756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.591 [2024-12-14 22:45:10.353945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.591 [2024-12-14 22:45:10.354121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.591 [2024-12-14 22:45:10.354129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.591 [2024-12-14 22:45:10.354136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.591 [2024-12-14 22:45:10.354144] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.591 [2024-12-14 22:45:10.366382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.591 [2024-12-14 22:45:10.366773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-12-14 22:45:10.366794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.591 [2024-12-14 22:45:10.366802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.591 [2024-12-14 22:45:10.366983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.591 [2024-12-14 22:45:10.367158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.591 [2024-12-14 22:45:10.367166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.591 [2024-12-14 22:45:10.367173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.591 [2024-12-14 22:45:10.367181] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.591 [2024-12-14 22:45:10.379408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.591 [2024-12-14 22:45:10.379730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-12-14 22:45:10.379752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.591 [2024-12-14 22:45:10.379761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.591 [2024-12-14 22:45:10.379942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.591 [2024-12-14 22:45:10.380124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.591 [2024-12-14 22:45:10.380133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.591 [2024-12-14 22:45:10.380140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.591 [2024-12-14 22:45:10.380147] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.591 [2024-12-14 22:45:10.392408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.591 [2024-12-14 22:45:10.392796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-12-14 22:45:10.392819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.591 [2024-12-14 22:45:10.392827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.591 [2024-12-14 22:45:10.393008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.591 [2024-12-14 22:45:10.393184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.591 [2024-12-14 22:45:10.393192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.591 [2024-12-14 22:45:10.393199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.591 [2024-12-14 22:45:10.393207] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.591 [2024-12-14 22:45:10.405438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.591 [2024-12-14 22:45:10.405818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-12-14 22:45:10.405839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.591 [2024-12-14 22:45:10.405848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.591 [2024-12-14 22:45:10.406028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.591 [2024-12-14 22:45:10.406203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.591 [2024-12-14 22:45:10.406211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.591 [2024-12-14 22:45:10.406218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.591 [2024-12-14 22:45:10.406226] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.591 [2024-12-14 22:45:10.418453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.591 [2024-12-14 22:45:10.418815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-12-14 22:45:10.418833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.591 [2024-12-14 22:45:10.418840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.591 [2024-12-14 22:45:10.419019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.591 [2024-12-14 22:45:10.419194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.591 [2024-12-14 22:45:10.419202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.591 [2024-12-14 22:45:10.419214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.591 [2024-12-14 22:45:10.419221] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.592 [2024-12-14 22:45:10.431449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.592 [2024-12-14 22:45:10.431741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.592 [2024-12-14 22:45:10.431758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.592 [2024-12-14 22:45:10.431765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.592 [2024-12-14 22:45:10.431944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.592 [2024-12-14 22:45:10.432118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.592 [2024-12-14 22:45:10.432127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.592 [2024-12-14 22:45:10.432133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.592 [2024-12-14 22:45:10.432139] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.592 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:49.592 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:49.592 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:49.592 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:49.592 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:49.592 [2024-12-14 22:45:10.444549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.592 [2024-12-14 22:45:10.444816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.592 [2024-12-14 22:45:10.444833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.592 [2024-12-14 22:45:10.444840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.592 [2024-12-14 22:45:10.445019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.592 [2024-12-14 22:45:10.445194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.592 [2024-12-14 22:45:10.445202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.592 [2024-12-14 22:45:10.445209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.592 [2024-12-14 22:45:10.445215] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.592 [2024-12-14 22:45:10.457597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.592 [2024-12-14 22:45:10.457884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.592 [2024-12-14 22:45:10.457900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.592 [2024-12-14 22:45:10.457919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.592 [2024-12-14 22:45:10.458092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.592 [2024-12-14 22:45:10.458264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.592 [2024-12-14 22:45:10.458277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.592 [2024-12-14 22:45:10.458283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.592 [2024-12-14 22:45:10.458290] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.592 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:49.592 [2024-12-14 22:45:10.470666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.592 [2024-12-14 22:45:10.470959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.592 [2024-12-14 22:45:10.470978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.592 [2024-12-14 22:45:10.470986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.592 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:49.592 [2024-12-14 22:45:10.471159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.592 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.592 [2024-12-14 22:45:10.471333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.592 [2024-12-14 22:45:10.471344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.592 [2024-12-14 22:45:10.471350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.592 [2024-12-14 22:45:10.471356] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.592 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:49.852 [2024-12-14 22:45:10.476835] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:49.852 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.852 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:49.852 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.852 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:49.852 [2024-12-14 22:45:10.483740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.852 [2024-12-14 22:45:10.484014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.852 [2024-12-14 22:45:10.484031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.852 [2024-12-14 22:45:10.484038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.852 [2024-12-14 22:45:10.484212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.852 [2024-12-14 22:45:10.484384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.852 [2024-12-14 22:45:10.484393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.852 [2024-12-14 22:45:10.484399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:35:49.852 [2024-12-14 22:45:10.484405] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.852 [2024-12-14 22:45:10.496793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.852 [2024-12-14 22:45:10.497096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.852 [2024-12-14 22:45:10.497112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.852 [2024-12-14 22:45:10.497119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.852 [2024-12-14 22:45:10.497292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.852 [2024-12-14 22:45:10.497464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.852 [2024-12-14 22:45:10.497473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.852 [2024-12-14 22:45:10.497479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.852 [2024-12-14 22:45:10.497485] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.852 [2024-12-14 22:45:10.509858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.852 [2024-12-14 22:45:10.510221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.852 [2024-12-14 22:45:10.510238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.853 [2024-12-14 22:45:10.510245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.853 [2024-12-14 22:45:10.510418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.853 [2024-12-14 22:45:10.510594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.853 [2024-12-14 22:45:10.510603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.853 [2024-12-14 22:45:10.510609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.853 [2024-12-14 22:45:10.510616] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.853 Malloc0 00:35:49.853 [2024-12-14 22:45:10.522819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.853 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.853 [2024-12-14 22:45:10.523246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.853 [2024-12-14 22:45:10.523263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.853 [2024-12-14 22:45:10.523270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.853 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:49.853 [2024-12-14 22:45:10.523444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.853 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.853 [2024-12-14 22:45:10.523617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.853 [2024-12-14 22:45:10.523626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.853 [2024-12-14 22:45:10.523632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.853 [2024-12-14 22:45:10.523638] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.853 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:49.853 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.853 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:49.853 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.853 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:49.853 [2024-12-14 22:45:10.535869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.853 [2024-12-14 22:45:10.536282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.853 [2024-12-14 22:45:10.536299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee0cf0 with addr=10.0.0.2, port=4420 00:35:49.853 [2024-12-14 22:45:10.536306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0cf0 is same with the state(6) to be set 00:35:49.853 [2024-12-14 22:45:10.536479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0cf0 (9): Bad file descriptor 00:35:49.853 [2024-12-14 22:45:10.536652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.853 [2024-12-14 22:45:10.536661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.853 [2024-12-14 22:45:10.536667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.853 [2024-12-14 22:45:10.536673] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.853 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.853 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:49.853 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.853 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:49.853 [2024-12-14 22:45:10.546025] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:49.853 [2024-12-14 22:45:10.548938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.853 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.853 22:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 535942 00:35:49.853 4855.33 IOPS, 18.97 MiB/s [2024-12-14T21:45:10.737Z] [2024-12-14 22:45:10.693153] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:35:52.171 5706.00 IOPS, 22.29 MiB/s
[2024-12-14T21:45:13.994Z] 6425.62 IOPS, 25.10 MiB/s
[2024-12-14T21:45:14.931Z] 6972.78 IOPS, 27.24 MiB/s
[2024-12-14T21:45:15.869Z] 7438.60 IOPS, 29.06 MiB/s
[2024-12-14T21:45:16.808Z] 7806.36 IOPS, 30.49 MiB/s
[2024-12-14T21:45:17.746Z] 8119.92 IOPS, 31.72 MiB/s
[2024-12-14T21:45:18.685Z] 8379.85 IOPS, 32.73 MiB/s
[2024-12-14T21:45:20.064Z] 8605.36 IOPS, 33.61 MiB/s
[2024-12-14T21:45:20.064Z] 8793.73 IOPS, 34.35 MiB/s
00:35:59.180 Latency(us)
00:35:59.180 [2024-12-14T21:45:20.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:59.180 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:59.180 Verification LBA range: start 0x0 length 0x4000
00:35:59.181 Nvme1n1 : 15.01 8796.57 34.36 11208.90 0.00 6378.42 628.05 18350.08
00:35:59.181 [2024-12-14T21:45:20.065Z] ===================================================================================================================
00:35:59.181 [2024-12-14T21:45:20.065Z] Total : 8796.57 34.36 11208.90 0.00 6378.42 628.05 18350.08
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:59.181 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 537237 ']'
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 537237
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 537237 ']'
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 537237
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 537237
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 537237'
killing process with pid 537237
22:45:19
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 537237 00:35:59.181 22:45:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 537237 00:35:59.440 22:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:59.440 22:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:59.440 22:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:59.440 22:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:59.440 22:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:35:59.440 22:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:59.441 22:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:35:59.441 22:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:59.441 22:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:59.441 22:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:59.441 22:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:59.441 22:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:01.349 22:45:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:01.349 00:36:01.349 real 0m26.087s 00:36:01.349 user 1m1.161s 00:36:01.349 sys 0m6.671s 00:36:01.349 22:45:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:01.349 22:45:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:01.349 ************************************ 00:36:01.349 END TEST nvmf_bdevperf 00:36:01.349 
************************************ 00:36:01.612 22:45:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:01.612 22:45:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:01.612 22:45:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:01.612 22:45:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.612 ************************************ 00:36:01.612 START TEST nvmf_target_disconnect 00:36:01.612 ************************************ 00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:01.612 * Looking for test storage... 00:36:01.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:36:01.612 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:36:01.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:01.612 --rc genhtml_branch_coverage=1
00:36:01.612 --rc genhtml_function_coverage=1
00:36:01.612 --rc genhtml_legend=1
00:36:01.612 --rc geninfo_all_blocks=1
00:36:01.613 --rc geninfo_unexecuted_blocks=1
00:36:01.613 00:36:01.613 ' 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:01.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.613 --rc genhtml_branch_coverage=1 00:36:01.613 --rc genhtml_function_coverage=1 00:36:01.613 --rc genhtml_legend=1 00:36:01.613 --rc geninfo_all_blocks=1 00:36:01.613 --rc geninfo_unexecuted_blocks=1 00:36:01.613 00:36:01.613 ' 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:01.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.613 --rc genhtml_branch_coverage=1 00:36:01.613 --rc genhtml_function_coverage=1 00:36:01.613 --rc genhtml_legend=1 00:36:01.613 --rc geninfo_all_blocks=1 00:36:01.613 --rc geninfo_unexecuted_blocks=1 00:36:01.613 00:36:01.613 ' 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:01.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.613 --rc genhtml_branch_coverage=1 00:36:01.613 --rc genhtml_function_coverage=1 00:36:01.613 --rc genhtml_legend=1 00:36:01.613 --rc geninfo_all_blocks=1 00:36:01.613 --rc geninfo_unexecuted_blocks=1 00:36:01.613 00:36:01.613 ' 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:01.613 22:45:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:01.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:01.613 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:01.891 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:36:01.891 22:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:08.479 
22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:08.479 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:08.479 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:08.479 Found net devices under 0000:af:00.0: cvl_0_0 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:08.479 Found net devices under 0000:af:00.1: cvl_0_1 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:36:08.479 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:08.480 22:45:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:08.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:08.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:36:08.480 00:36:08.480 --- 10.0.0.2 ping statistics --- 00:36:08.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.480 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:08.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:08.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:36:08.480 00:36:08.480 --- 10.0.0.1 ping statistics --- 00:36:08.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.480 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:08.480 22:45:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:08.480 ************************************ 00:36:08.480 START TEST nvmf_target_disconnect_tc1 00:36:08.480 ************************************ 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:08.480 [2024-12-14 22:45:28.541364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.480 [2024-12-14 22:45:28.541469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2362590 with 
addr=10.0.0.2, port=4420 00:36:08.480 [2024-12-14 22:45:28.541531] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:08.480 [2024-12-14 22:45:28.541557] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:08.480 [2024-12-14 22:45:28.541576] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:08.480 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:08.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:08.480 Initializing NVMe Controllers 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:08.480 00:36:08.480 real 0m0.115s 00:36:08.480 user 0m0.044s 00:36:08.480 sys 0m0.070s 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:08.480 ************************************ 00:36:08.480 END TEST nvmf_target_disconnect_tc1 00:36:08.480 ************************************ 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:08.480 22:45:28 
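The tc1 failure traced above is the expected path: with no target listening yet, the initiator's connect() fails with errno 111 (ECONNREFUSED), the probe aborts, and the NOT wrapper counts the non-zero exit as a pass. A minimal Python sketch of that errno check follows; the helper name and host/port values are illustrative, not from this run:

```python
import errno
import socket

def connect_errno(host, port, timeout=1.0):
    """Try a TCP connect; return 0 on success or the OS errno on failure."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return 0
    except OSError as e:
        return e.errno
    finally:
        s.close()
```

Connecting to a local port with no listener returns errno.ECONNREFUSED (111 on Linux), the same value the reconnect example logs before reporting "failed to create admin qpair".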
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:08.480 ************************************ 00:36:08.480 START TEST nvmf_target_disconnect_tc2 00:36:08.480 ************************************ 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=542305 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 542305 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 542305 ']' 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.480 [2024-12-14 22:45:28.671664] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:36:08.480 [2024-12-14 22:45:28.671703] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:08.480 [2024-12-14 22:45:28.746718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:08.480 [2024-12-14 22:45:28.769295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:08.480 [2024-12-14 22:45:28.769333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:08.480 [2024-12-14 22:45:28.769339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:08.480 [2024-12-14 22:45:28.769345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:08.480 [2024-12-14 22:45:28.769350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:08.480 [2024-12-14 22:45:28.770885] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:36:08.480 [2024-12-14 22:45:28.770993] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:36:08.480 [2024-12-14 22:45:28.771103] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:36:08.480 [2024-12-14 22:45:28.771104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.480 Malloc0 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.480 22:45:28 
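The target app above is started with `-m 0xF0`, and the log duly shows reactors on cores 4, 5, 6 and 7: each set bit in the mask selects one CPU core. A small sketch of that decoding (the function name is illustrative):

```python
def cores_from_mask(mask: int) -> list[int]:
    """Decode an SPDK-style hex core mask into the list of selected CPU cores:
    bit i set in the mask means core i runs a reactor."""
    return [i for i in range(mask.bit_length()) if (mask >> i) & 1]
```

For example, 0xF0 decodes to cores [4, 5, 6, 7], matching the four "Reactor started on core" notices in the log.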
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.480 [2024-12-14 22:45:28.927250] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.480 22:45:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.480 [2024-12-14 22:45:28.952249] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.480 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.481 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.481 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=542328 00:36:08.481 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:08.481 22:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:10.404 22:45:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 542305 00:36:10.404 22:45:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:10.404 Read completed with error (sct=0, sc=8) 00:36:10.404 starting I/O failed 00:36:10.404 Read completed with error (sct=0, sc=8) 00:36:10.404 starting I/O failed 00:36:10.404 Read completed with error (sct=0, sc=8) 00:36:10.404 starting I/O failed 00:36:10.404 Read completed with error (sct=0, sc=8) 00:36:10.404 starting I/O failed 00:36:10.404 Read completed with error (sct=0, sc=8) 00:36:10.404 starting I/O failed 00:36:10.404 Read completed with error (sct=0, sc=8) 00:36:10.404 starting I/O failed 00:36:10.404 Read completed with error (sct=0, sc=8) 00:36:10.404 starting I/O failed 00:36:10.404 Read completed with error (sct=0, sc=8) 00:36:10.404 starting I/O failed 00:36:10.404 Read completed with error (sct=0, sc=8) 00:36:10.404 starting I/O failed 00:36:10.404 Read completed with error (sct=0, sc=8) 00:36:10.404 starting I/O failed 00:36:10.404 Read completed with error (sct=0, sc=8) 00:36:10.404 starting I/O failed 00:36:10.404 Read completed with error (sct=0, sc=8) 00:36:10.404 starting I/O failed 00:36:10.404 Read completed with error (sct=0, sc=8) 00:36:10.404 starting I/O failed 00:36:10.404 Read completed with error (sct=0, sc=8) 00:36:10.404 starting I/O failed 00:36:10.404 Write completed with error (sct=0, sc=8) 00:36:10.404 starting I/O failed 00:36:10.405 Write completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Write completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Write completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Write completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Read completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 
Write completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Write completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Read completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Write completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Write completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Write completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Read completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Write completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Write completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Write completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Read completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Read completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Read completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Read completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Read completed with error (sct=0, sc=8) 00:36:10.405 [2024-12-14 22:45:30.979304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.405 starting I/O failed 00:36:10.405 Read completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Read completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Read completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Read completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Read completed with error (sct=0, sc=8) 00:36:10.405 starting I/O failed 00:36:10.405 Read completed with error (sct=0, sc=8) 00:36:10.405 starting I/O 
failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 [2024-12-14 22:45:30.979513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 [2024-12-14 22:45:30.979709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:10.405 [2024-12-14 22:45:30.979869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.405 [2024-12-14 22:45:30.979889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:10.405 qpair failed and we were unable to recover it.
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Read completed with error (sct=0, sc=8)
00:36:10.405 starting I/O failed
00:36:10.405 Write completed with error (sct=0, sc=8)
00:36:10.406 starting I/O failed
00:36:10.406 Read completed with error (sct=0, sc=8)
00:36:10.406 starting I/O failed
00:36:10.406 Write completed with error (sct=0, sc=8)
00:36:10.406 starting I/O failed
00:36:10.406 Read completed with error (sct=0, sc=8)
00:36:10.406 starting I/O failed
00:36:10.406 Write completed with error (sct=0, sc=8)
00:36:10.406 starting I/O failed
00:36:10.406 Read completed with error (sct=0, sc=8)
00:36:10.406 starting I/O failed
00:36:10.406 [2024-12-14 22:45:30.980091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:10.406 [2024-12-14 22:45:30.980266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.980288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.980523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.980536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.980751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.980783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.981051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.981088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.981274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.981307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.981506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.981550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.981707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.981718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.981870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.981879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.982099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.982133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.982275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.982306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.982502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.982534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.982809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.982819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.982957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.982969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.983116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.983125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.983207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.983216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.983355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.983366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.983587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.983598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.983842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.983873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.984098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.984156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.984449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.984483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.984724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.984734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.984955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.984989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.985166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.985197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.985381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.985413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.985626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.985657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.985923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.985957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.986131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.986163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.986270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.986301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.986444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.986476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.986706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.986738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.986930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.986964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.987205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.987236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.987417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.987447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.987664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.987696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.987878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.987916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.988174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.988205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.406 [2024-12-14 22:45:30.988400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.406 [2024-12-14 22:45:30.988432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.406 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.988545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.988575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.988745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.988778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.989017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.989050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.989218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.989248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.989384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.989430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.989721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.989752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.990033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.990066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.990258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.990289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.990476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.990508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.990746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.990777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.990963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.990996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.991256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.991287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.991524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.991555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.991726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.991757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.992042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.992075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.992262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.992292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.992476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.992506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.992711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.992743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.992885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.992932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.993115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.993148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.993357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.993388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.993654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.993685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.993866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.993898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.994169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.994201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.994407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.994439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.994736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.994768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.994978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.995011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.995202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.995233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.995520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.995551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.995718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.995749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.995968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.996000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.996271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.996302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.996534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.996565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.996833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.996865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.997025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.997058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.997255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.997287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.997488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.997519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.997700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.997731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.407 qpair failed and we were unable to recover it.
00:36:10.407 [2024-12-14 22:45:30.997981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.407 [2024-12-14 22:45:30.998014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.408 qpair failed and we were unable to recover it.
00:36:10.408 [2024-12-14 22:45:30.998200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.408 [2024-12-14 22:45:30.998231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.408 qpair failed and we were unable to recover it.
00:36:10.408 [2024-12-14 22:45:30.998485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.408 [2024-12-14 22:45:30.998516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.408 qpair failed and we were unable to recover it.
00:36:10.408 [2024-12-14 22:45:30.998804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.408 [2024-12-14 22:45:30.998836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.408 qpair failed and we were unable to recover it.
00:36:10.408 [2024-12-14 22:45:30.999050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.408 [2024-12-14 22:45:30.999083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.408 qpair failed and we were unable to recover it.
00:36:10.408 [2024-12-14 22:45:30.999227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.408 [2024-12-14 22:45:30.999258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.408 qpair failed and we were unable to recover it.
00:36:10.408 [2024-12-14 22:45:30.999464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.408 [2024-12-14 22:45:30.999500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.408 qpair failed and we were unable to recover it.
00:36:10.408 [2024-12-14 22:45:30.999711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.408 [2024-12-14 22:45:30.999742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.408 qpair failed and we were unable to recover it.
00:36:10.408 [2024-12-14 22:45:31.000022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.408 [2024-12-14 22:45:31.000055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.408 qpair failed and we were unable to recover it.
00:36:10.408 [2024-12-14 22:45:31.000328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.408 [2024-12-14 22:45:31.000360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.408 qpair failed and we were unable to recover it.
00:36:10.408 [2024-12-14 22:45:31.000483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.408 [2024-12-14 22:45:31.000514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.408 qpair failed and we were unable to recover it.
00:36:10.408 [2024-12-14 22:45:31.000648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.408 [2024-12-14 22:45:31.000679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.408 qpair failed and we were unable to recover it.
00:36:10.408 [2024-12-14 22:45:31.000921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.408 [2024-12-14 22:45:31.000953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.408 qpair failed and we were unable to recover it.
00:36:10.408 [2024-12-14 22:45:31.001133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.408 [2024-12-14 22:45:31.001164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.408 qpair failed and we were unable to recover it.
00:36:10.408 [2024-12-14 22:45:31.001350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.408 [2024-12-14 22:45:31.001381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.408 qpair failed and we were unable to recover it.
00:36:10.408 [2024-12-14 22:45:31.001594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.408 [2024-12-14 22:45:31.001625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.408 qpair failed and we were unable to recover it.
00:36:10.408 [2024-12-14 22:45:31.001892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.408 [2024-12-14 22:45:31.001933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.408 qpair failed and we were unable to recover it.
00:36:10.408 [2024-12-14 22:45:31.002211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.408 [2024-12-14 22:45:31.002243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.408 qpair failed and we were unable to recover it.
00:36:10.408 [2024-12-14 22:45:31.002363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.002395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 00:36:10.408 [2024-12-14 22:45:31.002654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.002685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 00:36:10.408 [2024-12-14 22:45:31.002976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.003009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 00:36:10.408 [2024-12-14 22:45:31.003135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.003166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 00:36:10.408 [2024-12-14 22:45:31.003351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.003383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 
00:36:10.408 [2024-12-14 22:45:31.003589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.003621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 00:36:10.408 [2024-12-14 22:45:31.003831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.003862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 00:36:10.408 [2024-12-14 22:45:31.004158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.004191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 00:36:10.408 [2024-12-14 22:45:31.004310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.004342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 00:36:10.408 [2024-12-14 22:45:31.004538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.004569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 
00:36:10.408 [2024-12-14 22:45:31.004825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.004856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 00:36:10.408 [2024-12-14 22:45:31.005146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.005180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 00:36:10.408 [2024-12-14 22:45:31.005454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.005485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 00:36:10.408 [2024-12-14 22:45:31.005770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.005802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 00:36:10.408 [2024-12-14 22:45:31.006037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.006071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 
00:36:10.408 [2024-12-14 22:45:31.006264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.006296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 00:36:10.408 [2024-12-14 22:45:31.006533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.006565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 00:36:10.408 [2024-12-14 22:45:31.006849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.006881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 00:36:10.408 [2024-12-14 22:45:31.007184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.007217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 00:36:10.408 [2024-12-14 22:45:31.007426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.007458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 
00:36:10.408 [2024-12-14 22:45:31.007703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.007734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.408 qpair failed and we were unable to recover it. 00:36:10.408 [2024-12-14 22:45:31.007946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.408 [2024-12-14 22:45:31.007979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.008236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.008268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.008556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.008587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.008856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.008888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 
00:36:10.409 [2024-12-14 22:45:31.009034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.009066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.009249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.009279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.009404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.009436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.009568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.009606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.009735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.009767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 
00:36:10.409 [2024-12-14 22:45:31.009954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.009988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.010126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.010157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.010416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.010448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.010642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.010674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.010881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.010920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 
00:36:10.409 [2024-12-14 22:45:31.011023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.011055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.011259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.011290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.011528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.011560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.011797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.011829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.012083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.012116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 
00:36:10.409 [2024-12-14 22:45:31.012299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.012330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.012616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.012648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.012922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.012955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.013154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.013186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.013442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.013473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 
00:36:10.409 [2024-12-14 22:45:31.013760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.013792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.014026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.014058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.014263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.014294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.014469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.014500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.014690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.014720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 
00:36:10.409 [2024-12-14 22:45:31.014947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.014978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.015153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.015185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.015383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.409 [2024-12-14 22:45:31.015414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.409 qpair failed and we were unable to recover it. 00:36:10.409 [2024-12-14 22:45:31.015654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.015685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.015858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.015890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 
00:36:10.410 [2024-12-14 22:45:31.016096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.016129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.016311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.016342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.016532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.016564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.016687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.016719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.016922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.016954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 
00:36:10.410 [2024-12-14 22:45:31.017135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.017167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.017357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.017389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.017568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.017617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.017786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.017818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.018003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.018037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 
00:36:10.410 [2024-12-14 22:45:31.018208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.018238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.018428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.018460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.018587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.018618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.018808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.018844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.019152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.019184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 
00:36:10.410 [2024-12-14 22:45:31.019390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.019423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.019602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.019634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.019778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.019809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.020072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.020104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.020394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.020425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 
00:36:10.410 [2024-12-14 22:45:31.020682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.020714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.020911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.020944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.021135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.021167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.021348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.021380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 00:36:10.410 [2024-12-14 22:45:31.021639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.410 [2024-12-14 22:45:31.021670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.410 qpair failed and we were unable to recover it. 
00:36:10.412 [2024-12-14 22:45:31.033531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.412 [2024-12-14 22:45:31.033602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.412 qpair failed and we were unable to recover it.
00:36:10.412 [2024-12-14 22:45:31.033866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.412 [2024-12-14 22:45:31.033918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.412 qpair failed and we were unable to recover it.
00:36:10.412 [2024-12-14 22:45:31.034215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.412 [2024-12-14 22:45:31.034248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.412 qpair failed and we were unable to recover it.
00:36:10.412 [2024-12-14 22:45:31.034512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.412 [2024-12-14 22:45:31.034544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.412 qpair failed and we were unable to recover it.
00:36:10.412 [2024-12-14 22:45:31.034782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.412 [2024-12-14 22:45:31.034813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.412 qpair failed and we were unable to recover it.
00:36:10.412 [2024-12-14 22:45:31.036215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.412 [2024-12-14 22:45:31.036314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:10.412 qpair failed and we were unable to recover it.
00:36:10.412 [2024-12-14 22:45:31.036524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.412 [2024-12-14 22:45:31.036559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:10.412 qpair failed and we were unable to recover it.
00:36:10.412 [2024-12-14 22:45:31.036694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.412 [2024-12-14 22:45:31.036726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:10.412 qpair failed and we were unable to recover it.
00:36:10.412 [2024-12-14 22:45:31.036924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.412 [2024-12-14 22:45:31.036958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:10.412 qpair failed and we were unable to recover it.
00:36:10.412 [2024-12-14 22:45:31.037100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.412 [2024-12-14 22:45:31.037133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:10.412 qpair failed and we were unable to recover it.
00:36:10.413 [2024-12-14 22:45:31.046169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-12-14 22:45:31.046201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-12-14 22:45:31.046378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-12-14 22:45:31.046410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-12-14 22:45:31.046590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-12-14 22:45:31.046621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-12-14 22:45:31.046862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-12-14 22:45:31.046894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-12-14 22:45:31.047094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-12-14 22:45:31.047127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 
00:36:10.413 [2024-12-14 22:45:31.047244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-12-14 22:45:31.047275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-12-14 22:45:31.047393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-12-14 22:45:31.047424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-12-14 22:45:31.047546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-12-14 22:45:31.047579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-12-14 22:45:31.047781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-12-14 22:45:31.047817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-12-14 22:45:31.048019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-12-14 22:45:31.048053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 
00:36:10.413 [2024-12-14 22:45:31.048183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-12-14 22:45:31.048215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-12-14 22:45:31.048404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-12-14 22:45:31.048435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-12-14 22:45:31.048611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-12-14 22:45:31.048643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-12-14 22:45:31.048864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-12-14 22:45:31.048897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-12-14 22:45:31.049089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-12-14 22:45:31.049121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 
00:36:10.413 [2024-12-14 22:45:31.049241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-12-14 22:45:31.049272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.049467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.049498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.049617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.049650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.049849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.049880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.050098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.050130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 
00:36:10.414 [2024-12-14 22:45:31.050387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.050418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.050612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.050643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.050913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.050947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.051138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.051170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.051448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.051478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 
00:36:10.414 [2024-12-14 22:45:31.051686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.051718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.051853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.051890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.052084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.052117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.052333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.052363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.052535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.052567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 
00:36:10.414 [2024-12-14 22:45:31.052809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.052841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.053022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.053055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.053244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.053276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.053402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.053433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.053691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.053723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 
00:36:10.414 [2024-12-14 22:45:31.053918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.053951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.054200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.054231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.054405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.054436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.054691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.054723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.054895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.054936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 
00:36:10.414 [2024-12-14 22:45:31.055058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.055090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.055347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.055378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.055496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.055527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.055651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.055682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.055866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.055899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 
00:36:10.414 [2024-12-14 22:45:31.056185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.056217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.056416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.056447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.056684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.056715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.056918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.056950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.057076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.057107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 
00:36:10.414 [2024-12-14 22:45:31.057282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.057312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.057484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.057515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.057685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.057716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.057894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.057936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-12-14 22:45:31.058142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-12-14 22:45:31.058173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 
00:36:10.414 [2024-12-14 22:45:31.058438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.058470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.058651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.058683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.058871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.058911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.059037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.059067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.059352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.059383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 
00:36:10.415 [2024-12-14 22:45:31.059495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.059526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.059718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.059750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.059943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.059976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.060117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.060148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.060409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.060441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 
00:36:10.415 [2024-12-14 22:45:31.060627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.060657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.060843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.060874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.061003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.061041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.061226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.061259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.061396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.061427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 
00:36:10.415 [2024-12-14 22:45:31.061598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.061629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.061866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.061898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.062087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.062119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.062304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.062335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.062505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.062537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 
00:36:10.415 [2024-12-14 22:45:31.062658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.062688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.062877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.062917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.063056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.063087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.063202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.063233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.063346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.063378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 
00:36:10.415 [2024-12-14 22:45:31.063499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.063531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.063726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.063758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.063943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.063976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.064186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.064218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-12-14 22:45:31.064483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-12-14 22:45:31.064515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 
00:36:10.415 [2024-12-14 22:45:31.064712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.415 [2024-12-14 22:45:31.064744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.415 qpair failed and we were unable to recover it.
00:36:10.415 [2024-12-14 22:45:31.064920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.415 [2024-12-14 22:45:31.064952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.415 qpair failed and we were unable to recover it.
00:36:10.415 [2024-12-14 22:45:31.065125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.415 [2024-12-14 22:45:31.065156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.415 qpair failed and we were unable to recover it.
00:36:10.415 [2024-12-14 22:45:31.065346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.415 [2024-12-14 22:45:31.065378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.415 qpair failed and we were unable to recover it.
00:36:10.415 [2024-12-14 22:45:31.065615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.415 [2024-12-14 22:45:31.065647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.415 qpair failed and we were unable to recover it.
00:36:10.415 [2024-12-14 22:45:31.065748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.415 [2024-12-14 22:45:31.065779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.415 qpair failed and we were unable to recover it.
00:36:10.415 [2024-12-14 22:45:31.065980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.415 [2024-12-14 22:45:31.066013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.415 qpair failed and we were unable to recover it.
00:36:10.415 [2024-12-14 22:45:31.066188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.415 [2024-12-14 22:45:31.066220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.415 qpair failed and we were unable to recover it.
00:36:10.415 [2024-12-14 22:45:31.066459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.415 [2024-12-14 22:45:31.066492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.415 qpair failed and we were unable to recover it.
00:36:10.415 [2024-12-14 22:45:31.066729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.415 [2024-12-14 22:45:31.066768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.415 qpair failed and we were unable to recover it.
00:36:10.415 [2024-12-14 22:45:31.066941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.066974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.067081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.067112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.067225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.067256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.067446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.067478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.067667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.067699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.067887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.067937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.068056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.068087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.068263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.068295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.068494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.068526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.068697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.068729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.068848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.068878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.069150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.069183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.069370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.069401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.069622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.069654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.069778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.069809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.069988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.070020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.070143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.070173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.070355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.070387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.070496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.070526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.070712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.070743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.071000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.071032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.071232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.071263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.071503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.071534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.071671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.071702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.071835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.071867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.072081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.072116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.072220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.072251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.072499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.072532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.072718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.072750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.072999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.073033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.073220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.073252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.073427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.073458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.073661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.073693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.073868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.073899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.074190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.074222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.074360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.074391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.074644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.074675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.074939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.074972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.075078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.075109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.416 [2024-12-14 22:45:31.075283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.416 [2024-12-14 22:45:31.075314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.416 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.075560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.075598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.075834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.075866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.076107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.076141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.076346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.076377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.076565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.076597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.076838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.076869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.077122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.077155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.077341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.077373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.077568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.077600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.077803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.077835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.078025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.078059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.078316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.078347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.078565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.078597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.078710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.078741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.078925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.078958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.079193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.079226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.079407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.079439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.079632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.079663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.079783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.079813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.079995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.080030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.080142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.080173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.080380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.080413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.080535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.080566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.080682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.080713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.080922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.080956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.081223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.081255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.081384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.081416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.081626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.081658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.081913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.081948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.082085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.082117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.082306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.082338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.082526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.082559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.082685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.082718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.082922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.082956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.083075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.083105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.083211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.083244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.083358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.083388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.083577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.083609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.083778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.417 [2024-12-14 22:45:31.083811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.417 qpair failed and we were unable to recover it.
00:36:10.417 [2024-12-14 22:45:31.083999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.418 [2024-12-14 22:45:31.084034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.418 qpair failed and we were unable to recover it.
00:36:10.418 [2024-12-14 22:45:31.084248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.418 [2024-12-14 22:45:31.084280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:10.418 qpair failed and we were unable to recover it.
00:36:10.418 [2024-12-14 22:45:31.084519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.418 [2024-12-14 22:45:31.084591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:10.418 qpair failed and we were unable to recover it.
00:36:10.418 [2024-12-14 22:45:31.084744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.418 [2024-12-14 22:45:31.084780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:10.418 qpair failed and we were unable to recover it.
00:36:10.418 [2024-12-14 22:45:31.085023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.418 [2024-12-14 22:45:31.085059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:10.418 qpair failed and we were unable to recover it.
00:36:10.418 [2024-12-14 22:45:31.085196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.418 [2024-12-14 22:45:31.085228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:10.418 qpair failed and we were unable to recover it.
00:36:10.418 [2024-12-14 22:45:31.085420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.085453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.085690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.085722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.085895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.085943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.086133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.086166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.086354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.086386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 
00:36:10.418 [2024-12-14 22:45:31.086579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.086610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.086793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.086825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.087017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.087052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.087236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.087268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.087454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.087502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 
00:36:10.418 [2024-12-14 22:45:31.087677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.087710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.087886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.087931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.088105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.088137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.088319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.088349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.088544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.088576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 
00:36:10.418 [2024-12-14 22:45:31.088689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.088720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.088861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.088893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.089087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.089120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.089342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.089373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.089489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.089520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 
00:36:10.418 [2024-12-14 22:45:31.089762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.089794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.090081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.090114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.090294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.090326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.090570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.090603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.090711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.090743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 
00:36:10.418 [2024-12-14 22:45:31.090959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.090992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.091187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.091219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.091398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.091430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.418 qpair failed and we were unable to recover it. 00:36:10.418 [2024-12-14 22:45:31.091633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.418 [2024-12-14 22:45:31.091665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.091788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.091819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 
00:36:10.419 [2024-12-14 22:45:31.091927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.091960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.092142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.092174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.092294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.092325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.092436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.092468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.092585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.092616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 
00:36:10.419 [2024-12-14 22:45:31.092735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.092767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.093014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.093086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.093299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.093338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.093539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.093571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.093815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.093847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 
00:36:10.419 [2024-12-14 22:45:31.094145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.094179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.094355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.094387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.094598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.094630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.094889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.094931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.095070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.095101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 
00:36:10.419 [2024-12-14 22:45:31.095361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.095392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.095632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.095664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.095853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.095885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.096097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.096129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.096316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.096363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 
00:36:10.419 [2024-12-14 22:45:31.096602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.096635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.096901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.096959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.097148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.097180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.097352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.097384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.097636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.097668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 
00:36:10.419 [2024-12-14 22:45:31.097863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.097895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.098034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.098067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.098250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.098281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.098456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.098487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.098603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.098635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 
00:36:10.419 [2024-12-14 22:45:31.098823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.098854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.098990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.099023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.099214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.099246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.099367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.099399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.099635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.099667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 
00:36:10.419 [2024-12-14 22:45:31.099789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.099820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.099929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.419 [2024-12-14 22:45:31.099962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.419 qpair failed and we were unable to recover it. 00:36:10.419 [2024-12-14 22:45:31.100246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-12-14 22:45:31.100277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-12-14 22:45:31.100489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-12-14 22:45:31.100521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-12-14 22:45:31.100619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-12-14 22:45:31.100650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 
00:36:10.420 [2024-12-14 22:45:31.100852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-12-14 22:45:31.100883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-12-14 22:45:31.101064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-12-14 22:45:31.101096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-12-14 22:45:31.101360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-12-14 22:45:31.101392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-12-14 22:45:31.101618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-12-14 22:45:31.101649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-12-14 22:45:31.101837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-12-14 22:45:31.101868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 
00:36:10.420 [2024-12-14 22:45:31.102110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-12-14 22:45:31.102142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-12-14 22:45:31.102271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-12-14 22:45:31.102308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-12-14 22:45:31.102424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-12-14 22:45:31.102454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-12-14 22:45:31.102642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-12-14 22:45:31.102674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-12-14 22:45:31.102790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-12-14 22:45:31.102822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 
00:36:10.420 [2024-12-14 22:45:31.102938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-12-14 22:45:31.102970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-12-14 22:45:31.103096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-12-14 22:45:31.103128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-12-14 22:45:31.103305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-12-14 22:45:31.103337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-12-14 22:45:31.103462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-12-14 22:45:31.103493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-12-14 22:45:31.103689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-12-14 22:45:31.103720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 
00:36:10.420 [2024-12-14 22:45:31.103920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.103954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.104260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.104292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.104568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.104600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.104702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.104733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.104860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.104891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.105135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.105168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.105358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.105390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.105624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.105654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.105826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.105857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.106043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.106076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.106206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.106238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.106498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.106529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.106784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.106815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.107052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.107085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.107280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.107312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.107493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.107524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.107775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.107806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.107931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.107965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.108239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.108271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.108398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.108430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.108532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.108563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.420 qpair failed and we were unable to recover it.
00:36:10.420 [2024-12-14 22:45:31.108835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.420 [2024-12-14 22:45:31.108867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.109080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.109112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.109283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.109315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.109432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.109462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.109664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.109695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.109932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.109965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.110155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.110186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.110391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.110423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.110698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.110729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.110913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.110946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.111124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.111161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.111365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.111396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.111519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.111550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.111663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.111694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.111934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.111967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.112173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.112204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.112456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.112488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.112726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.112757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.112888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.112929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.113067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.113098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.113269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.113299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.113488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.113519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.113712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.113743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.113922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.113954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.114149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.114181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.114423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.114455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.114659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.114690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.114809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.114840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.115027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.115060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.115165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.115195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.115388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.115419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.115586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.115618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.115885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.115949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.116067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.116099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.116280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.116311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.116572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.116604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.116774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.116805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.116981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.117014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.117157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.117189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.421 [2024-12-14 22:45:31.117425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.421 [2024-12-14 22:45:31.117456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.421 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.117631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.117662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.117911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.117944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.118131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.118162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.118296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.118326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.118496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.118527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.118642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.118674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.118871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.118909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.119083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.119114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.119300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.119332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.119569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.119599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.119863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.119900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.120095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.120127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.120314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.120345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.120561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.120592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.120800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.120831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.120955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.120988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.121171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.121202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.121379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.121410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.121609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.121640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.121829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.121860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.122078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.122110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.122225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.122256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.122444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.122475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.122605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.122636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.122882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.122923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.123141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.123172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.123363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.123394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.123570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.123601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.123776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.123808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.123946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.123978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.124212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.124244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.124426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.124457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.124643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.124675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.422 [2024-12-14 22:45:31.124959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.422 [2024-12-14 22:45:31.124993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.422 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.125193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.125225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.125351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.125381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.125486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.125518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.125705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.125736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.125915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.125947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.126052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.126082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.126206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.126237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.126498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.126529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.126716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.126747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.126879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.126918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.127156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.127187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.127376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.127407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.127648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.127680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.127901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.127940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.128151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.128183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.128305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.128336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.128465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.128501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.128683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.128715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.128842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.128873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.129152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.423 [2024-12-14 22:45:31.129184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.423 qpair failed and we were unable to recover it.
00:36:10.423 [2024-12-14 22:45:31.129304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.129335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 00:36:10.423 [2024-12-14 22:45:31.129520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.129551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 00:36:10.423 [2024-12-14 22:45:31.129816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.129848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 00:36:10.423 [2024-12-14 22:45:31.130031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.130063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 00:36:10.423 [2024-12-14 22:45:31.130350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.130381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 
00:36:10.423 [2024-12-14 22:45:31.130496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.130527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 00:36:10.423 [2024-12-14 22:45:31.130728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.130759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 00:36:10.423 [2024-12-14 22:45:31.130866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.130898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 00:36:10.423 [2024-12-14 22:45:31.131174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.131206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 00:36:10.423 [2024-12-14 22:45:31.131470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.131501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 
00:36:10.423 [2024-12-14 22:45:31.131765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.131796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 00:36:10.423 [2024-12-14 22:45:31.132033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.132066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 00:36:10.423 [2024-12-14 22:45:31.132248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.132279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 00:36:10.423 [2024-12-14 22:45:31.132462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.132493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 00:36:10.423 [2024-12-14 22:45:31.132627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.132657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 
00:36:10.423 [2024-12-14 22:45:31.132773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.132804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 00:36:10.423 [2024-12-14 22:45:31.132937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.132970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 00:36:10.423 [2024-12-14 22:45:31.133110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.133141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 00:36:10.423 [2024-12-14 22:45:31.133333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.423 [2024-12-14 22:45:31.133364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.423 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.133496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.133528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 
00:36:10.424 [2024-12-14 22:45:31.133717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.133748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.133932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.133965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.134144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.134175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.134284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.134315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.134492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.134523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 
00:36:10.424 [2024-12-14 22:45:31.134786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.134818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.134935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.134967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.135245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.135276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.135532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.135563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.135694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.135725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 
00:36:10.424 [2024-12-14 22:45:31.135861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.135892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.136073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.136105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.136278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.136309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.136429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.136461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.136646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.136676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 
00:36:10.424 [2024-12-14 22:45:31.136844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.136876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.137071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.137108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.137229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.137260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.137428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.137459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.137651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.137683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 
00:36:10.424 [2024-12-14 22:45:31.137880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.137923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.138189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.138221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.138482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.138512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.138773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.138805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.138993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.139026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 
00:36:10.424 [2024-12-14 22:45:31.139211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.139244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.139470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.139501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.139602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.139633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.139814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.139846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.139992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.140024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 
00:36:10.424 [2024-12-14 22:45:31.140221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.140252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.140368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.140400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.140662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.140693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.140966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.140998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.141221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.141253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 
00:36:10.424 [2024-12-14 22:45:31.141463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.141493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.141702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.141734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.141997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.424 [2024-12-14 22:45:31.142030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.424 qpair failed and we were unable to recover it. 00:36:10.424 [2024-12-14 22:45:31.142209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.142240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 00:36:10.425 [2024-12-14 22:45:31.142424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.142456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 
00:36:10.425 [2024-12-14 22:45:31.142689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.142721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 00:36:10.425 [2024-12-14 22:45:31.142893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.142937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 00:36:10.425 [2024-12-14 22:45:31.143130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.143161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 00:36:10.425 [2024-12-14 22:45:31.143373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.143405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 00:36:10.425 [2024-12-14 22:45:31.143538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.143569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 
00:36:10.425 [2024-12-14 22:45:31.143754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.143786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 00:36:10.425 [2024-12-14 22:45:31.143978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.144011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 00:36:10.425 [2024-12-14 22:45:31.144114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.144145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 00:36:10.425 [2024-12-14 22:45:31.144261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.144293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 00:36:10.425 [2024-12-14 22:45:31.144530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.144562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 
00:36:10.425 [2024-12-14 22:45:31.144750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.144781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 00:36:10.425 [2024-12-14 22:45:31.144964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.144996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 00:36:10.425 [2024-12-14 22:45:31.145129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.145160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 00:36:10.425 [2024-12-14 22:45:31.145347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.145378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 00:36:10.425 [2024-12-14 22:45:31.145554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.145585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 
00:36:10.425 [2024-12-14 22:45:31.145767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.145798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 00:36:10.425 [2024-12-14 22:45:31.145976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.146014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 00:36:10.425 [2024-12-14 22:45:31.146197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.146228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 00:36:10.425 [2024-12-14 22:45:31.146433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.146465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 00:36:10.425 [2024-12-14 22:45:31.146647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.425 [2024-12-14 22:45:31.146679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.425 qpair failed and we were unable to recover it. 
00:36:10.428 [2024-12-14 22:45:31.169878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.169917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 00:36:10.428 [2024-12-14 22:45:31.170185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.170216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 00:36:10.428 [2024-12-14 22:45:31.170417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.170448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 00:36:10.428 [2024-12-14 22:45:31.170661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.170692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 00:36:10.428 [2024-12-14 22:45:31.170941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.170973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 
00:36:10.428 [2024-12-14 22:45:31.171088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.171119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 00:36:10.428 [2024-12-14 22:45:31.171328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.171364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 00:36:10.428 [2024-12-14 22:45:31.171550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.171581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 00:36:10.428 [2024-12-14 22:45:31.171718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.171750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 00:36:10.428 [2024-12-14 22:45:31.171866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.171897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 
00:36:10.428 [2024-12-14 22:45:31.172098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.172130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 00:36:10.428 [2024-12-14 22:45:31.172367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.172397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 00:36:10.428 [2024-12-14 22:45:31.172508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.172540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 00:36:10.428 [2024-12-14 22:45:31.172676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.172707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 00:36:10.428 [2024-12-14 22:45:31.172952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.172986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 
00:36:10.428 [2024-12-14 22:45:31.173170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.173202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 00:36:10.428 [2024-12-14 22:45:31.173378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.173410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 00:36:10.428 [2024-12-14 22:45:31.173580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.173610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 00:36:10.428 [2024-12-14 22:45:31.173849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.173881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.428 qpair failed and we were unable to recover it. 00:36:10.428 [2024-12-14 22:45:31.174100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.428 [2024-12-14 22:45:31.174132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 
00:36:10.429 [2024-12-14 22:45:31.174327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.174358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.174548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.174579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.174794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.174826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.175004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.175037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.175170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.175202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 
00:36:10.429 [2024-12-14 22:45:31.175387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.175418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.175654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.175686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.175787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.175818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.176027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.176061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.176249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.176281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 
00:36:10.429 [2024-12-14 22:45:31.176461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.176492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.176613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.176644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.176823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.176854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.177140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.177173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.177290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.177322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 
00:36:10.429 [2024-12-14 22:45:31.177512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.177544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.177715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.177745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.177924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.177957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.178127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.178158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.178399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.178431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 
00:36:10.429 [2024-12-14 22:45:31.178626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.178657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.178924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.178957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.179140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.179171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.179383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.179415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.179590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.179622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 
00:36:10.429 [2024-12-14 22:45:31.179802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.179833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.180109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.180147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.180392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.180424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.180558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.180589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.180840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.180871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 
00:36:10.429 [2024-12-14 22:45:31.181076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.181108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.181300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.181331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.181454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.181486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.181602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.181633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.181879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.181920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 
00:36:10.429 [2024-12-14 22:45:31.182160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.182192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.182303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.182334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.182543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.182574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.429 qpair failed and we were unable to recover it. 00:36:10.429 [2024-12-14 22:45:31.182681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.429 [2024-12-14 22:45:31.182712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.430 qpair failed and we were unable to recover it. 00:36:10.430 [2024-12-14 22:45:31.182842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.430 [2024-12-14 22:45:31.182872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.430 qpair failed and we were unable to recover it. 
00:36:10.430 [2024-12-14 22:45:31.183152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.430 [2024-12-14 22:45:31.183185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.430 qpair failed and we were unable to recover it. 00:36:10.430 [2024-12-14 22:45:31.183311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.430 [2024-12-14 22:45:31.183341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.430 qpair failed and we were unable to recover it. 00:36:10.430 [2024-12-14 22:45:31.183567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.430 [2024-12-14 22:45:31.183598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.430 qpair failed and we were unable to recover it. 00:36:10.430 [2024-12-14 22:45:31.183728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.430 [2024-12-14 22:45:31.183759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.430 qpair failed and we were unable to recover it. 00:36:10.430 [2024-12-14 22:45:31.183878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.430 [2024-12-14 22:45:31.183940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.430 qpair failed and we were unable to recover it. 
00:36:10.430 [2024-12-14 22:45:31.184064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.430 [2024-12-14 22:45:31.184095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.430 qpair failed and we were unable to recover it. 00:36:10.430 [2024-12-14 22:45:31.184288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.430 [2024-12-14 22:45:31.184319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.430 qpair failed and we were unable to recover it. 00:36:10.430 [2024-12-14 22:45:31.184491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.430 [2024-12-14 22:45:31.184523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.430 qpair failed and we were unable to recover it. 00:36:10.430 [2024-12-14 22:45:31.184642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.430 [2024-12-14 22:45:31.184673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.430 qpair failed and we were unable to recover it. 00:36:10.430 [2024-12-14 22:45:31.184931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.430 [2024-12-14 22:45:31.184985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.430 qpair failed and we were unable to recover it. 
00:36:10.430 [2024-12-14 22:45:31.185181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.430 [2024-12-14 22:45:31.185214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.430 qpair failed and we were unable to recover it. 00:36:10.430 [2024-12-14 22:45:31.185395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.430 [2024-12-14 22:45:31.185426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.430 qpair failed and we were unable to recover it. 00:36:10.430 [2024-12-14 22:45:31.185664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.430 [2024-12-14 22:45:31.185695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.430 qpair failed and we were unable to recover it. 00:36:10.430 [2024-12-14 22:45:31.185889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.430 [2024-12-14 22:45:31.185934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.430 qpair failed and we were unable to recover it. 00:36:10.430 [2024-12-14 22:45:31.186133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.430 [2024-12-14 22:45:31.186165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.430 qpair failed and we were unable to recover it. 
00:36:10.430 [2024-12-14 22:45:31.186303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.430 [2024-12-14 22:45:31.186335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.430 qpair failed and we were unable to recover it.
00:36:10.433 [2024-12-14 22:45:31.211103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.433 [2024-12-14 22:45:31.211137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:10.433 qpair failed and we were unable to recover it.
00:36:10.433 [2024-12-14 22:45:31.211386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.211417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 00:36:10.433 [2024-12-14 22:45:31.211543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.211574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 00:36:10.433 [2024-12-14 22:45:31.211814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.211845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 00:36:10.433 [2024-12-14 22:45:31.211998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.212032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 00:36:10.433 [2024-12-14 22:45:31.212278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.212309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 
00:36:10.433 [2024-12-14 22:45:31.212565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.212596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 00:36:10.433 [2024-12-14 22:45:31.212721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.212754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 00:36:10.433 [2024-12-14 22:45:31.212959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.212992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 00:36:10.433 [2024-12-14 22:45:31.213182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.213213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 00:36:10.433 [2024-12-14 22:45:31.213398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.213430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 
00:36:10.433 [2024-12-14 22:45:31.213617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.213648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 00:36:10.433 [2024-12-14 22:45:31.213828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.213859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 00:36:10.433 [2024-12-14 22:45:31.214049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.214082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 00:36:10.433 [2024-12-14 22:45:31.214318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.214349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 00:36:10.433 [2024-12-14 22:45:31.214588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.214620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 
00:36:10.433 [2024-12-14 22:45:31.214738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.214769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 00:36:10.433 [2024-12-14 22:45:31.214939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.214973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 00:36:10.433 [2024-12-14 22:45:31.215158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.215189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 00:36:10.433 [2024-12-14 22:45:31.215321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.215352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 00:36:10.433 [2024-12-14 22:45:31.215477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.215508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 
00:36:10.433 [2024-12-14 22:45:31.215716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.433 [2024-12-14 22:45:31.215748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.433 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.215847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.215879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.216088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.216120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.216252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.216284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.216396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.216427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 
00:36:10.434 [2024-12-14 22:45:31.216605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.216636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.216767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.216799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.216998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.217030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.217132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.217163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.217349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.217381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 
00:36:10.434 [2024-12-14 22:45:31.217599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.217630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.217765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.217796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.217964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.217997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.218182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.218213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.218403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.218434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 
00:36:10.434 [2024-12-14 22:45:31.218559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.218589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.218770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.218799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.218989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.219020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.219202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.219232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.219413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.219444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 
00:36:10.434 [2024-12-14 22:45:31.219629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.219658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.219844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.219875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.220090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.220121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.220235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.220270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.220476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.220507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 
00:36:10.434 [2024-12-14 22:45:31.220702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.220731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.220901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.220942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.221112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.221143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.221244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.221275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.221463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.221493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 
00:36:10.434 [2024-12-14 22:45:31.221756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.221785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.221910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.221941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.222126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.222155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.222282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.222312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.222486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.222518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 
00:36:10.434 [2024-12-14 22:45:31.222646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.222676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.222785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.222816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.222951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.222984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.223250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.223280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 00:36:10.434 [2024-12-14 22:45:31.223458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.434 [2024-12-14 22:45:31.223489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.434 qpair failed and we were unable to recover it. 
00:36:10.434 [2024-12-14 22:45:31.223679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.223708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 00:36:10.435 [2024-12-14 22:45:31.223878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.223915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 00:36:10.435 [2024-12-14 22:45:31.224151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.224181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 00:36:10.435 [2024-12-14 22:45:31.224301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.224330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 00:36:10.435 [2024-12-14 22:45:31.224517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.224547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 
00:36:10.435 [2024-12-14 22:45:31.224672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.224702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 00:36:10.435 [2024-12-14 22:45:31.224882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.224938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 00:36:10.435 [2024-12-14 22:45:31.225126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.225156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 00:36:10.435 [2024-12-14 22:45:31.225323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.225354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 00:36:10.435 [2024-12-14 22:45:31.225610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.225642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 
00:36:10.435 [2024-12-14 22:45:31.225825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.225857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 00:36:10.435 [2024-12-14 22:45:31.225981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.226014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 00:36:10.435 [2024-12-14 22:45:31.226222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.226253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 00:36:10.435 [2024-12-14 22:45:31.226373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.226404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 00:36:10.435 [2024-12-14 22:45:31.226581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.226613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 
00:36:10.435 [2024-12-14 22:45:31.226874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.226916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 00:36:10.435 [2024-12-14 22:45:31.227128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.227159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 00:36:10.435 [2024-12-14 22:45:31.227394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.227425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 00:36:10.435 [2024-12-14 22:45:31.227673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.227705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 00:36:10.435 [2024-12-14 22:45:31.227849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.435 [2024-12-14 22:45:31.227879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.435 qpair failed and we were unable to recover it. 
00:36:10.438 [2024-12-14 22:45:31.249982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.250014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 00:36:10.438 [2024-12-14 22:45:31.250137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.250169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 00:36:10.438 [2024-12-14 22:45:31.250290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.250321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 00:36:10.438 [2024-12-14 22:45:31.250494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.250525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 00:36:10.438 [2024-12-14 22:45:31.250701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.250732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 
00:36:10.438 [2024-12-14 22:45:31.250917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.250952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 00:36:10.438 [2024-12-14 22:45:31.251061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.251093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 00:36:10.438 [2024-12-14 22:45:31.251265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.251296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 00:36:10.438 [2024-12-14 22:45:31.251467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.251498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 00:36:10.438 [2024-12-14 22:45:31.251620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.251650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 
00:36:10.438 [2024-12-14 22:45:31.251770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.251801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 00:36:10.438 [2024-12-14 22:45:31.251982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.252014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 00:36:10.438 [2024-12-14 22:45:31.252132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.252162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 00:36:10.438 [2024-12-14 22:45:31.252281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.252318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 00:36:10.438 [2024-12-14 22:45:31.252510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.252542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 
00:36:10.438 [2024-12-14 22:45:31.252730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.252762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 00:36:10.438 [2024-12-14 22:45:31.253000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.253033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 00:36:10.438 [2024-12-14 22:45:31.253159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.253190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 00:36:10.438 [2024-12-14 22:45:31.253309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.253340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 00:36:10.438 [2024-12-14 22:45:31.253445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.253477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 
00:36:10.438 [2024-12-14 22:45:31.253596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.438 [2024-12-14 22:45:31.253628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.438 qpair failed and we were unable to recover it. 00:36:10.438 [2024-12-14 22:45:31.253805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.253836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.254042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.254074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.254212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.254243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.254425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.254456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 
00:36:10.439 [2024-12-14 22:45:31.254652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.254683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.254819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.254851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.254972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.255004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.255109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.255141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.255259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.255291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 
00:36:10.439 [2024-12-14 22:45:31.255421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.255452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.255639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.255670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.255845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.255875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.256122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.256154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.256278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.256309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 
00:36:10.439 [2024-12-14 22:45:31.256628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.256659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.256844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.256875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.257096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.257128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.257369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.257399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.257639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.257670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 
00:36:10.439 [2024-12-14 22:45:31.257807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.257838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.258102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.258135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.258373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.258404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.258544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.258575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.258679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.258710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 
00:36:10.439 [2024-12-14 22:45:31.258895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.258934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.259128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.259160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.259332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.259363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.259487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.259519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.259778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.259809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 
00:36:10.439 [2024-12-14 22:45:31.259925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.259958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.260075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.260107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.260217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.260248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.260493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.260531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.260670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.260701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 
00:36:10.439 [2024-12-14 22:45:31.260883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.260925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.261033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.261065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.261246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.261277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.261402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.261434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 00:36:10.439 [2024-12-14 22:45:31.261609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.439 [2024-12-14 22:45:31.261640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.439 qpair failed and we were unable to recover it. 
00:36:10.439 [2024-12-14 22:45:31.261753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.261784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 00:36:10.440 [2024-12-14 22:45:31.262027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.262061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 00:36:10.440 [2024-12-14 22:45:31.262251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.262282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 00:36:10.440 [2024-12-14 22:45:31.262502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.262534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 00:36:10.440 [2024-12-14 22:45:31.262719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.262749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 
00:36:10.440 [2024-12-14 22:45:31.262929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.262963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 00:36:10.440 [2024-12-14 22:45:31.263071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.263103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 00:36:10.440 [2024-12-14 22:45:31.263294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.263325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 00:36:10.440 [2024-12-14 22:45:31.263493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.263526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 00:36:10.440 [2024-12-14 22:45:31.263709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.263740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 
00:36:10.440 [2024-12-14 22:45:31.263964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.263997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 00:36:10.440 [2024-12-14 22:45:31.264237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.264268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 00:36:10.440 [2024-12-14 22:45:31.264438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.264469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 00:36:10.440 [2024-12-14 22:45:31.264600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.264631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 00:36:10.440 [2024-12-14 22:45:31.264819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.264850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 
00:36:10.440 [2024-12-14 22:45:31.265118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.265150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 00:36:10.440 [2024-12-14 22:45:31.265254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.265286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 00:36:10.440 [2024-12-14 22:45:31.265427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.265459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 00:36:10.440 [2024-12-14 22:45:31.265641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.265672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 00:36:10.440 [2024-12-14 22:45:31.265808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.440 [2024-12-14 22:45:31.265839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.440 qpair failed and we were unable to recover it. 
00:36:10.725 [2024-12-14 22:45:31.288619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.288650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.288886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.288930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.289141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.289174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.289350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.289382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.289520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.289552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 
00:36:10.725 [2024-12-14 22:45:31.289725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.289756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.289963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.289995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.290115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.290146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.290322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.290353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.290541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.290572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 
00:36:10.725 [2024-12-14 22:45:31.290697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.290728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.290912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.290947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.291183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.291214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.291407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.291438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.291696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.291727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 
00:36:10.725 [2024-12-14 22:45:31.291842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.291874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.292088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.292120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.292381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.292412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.292656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.292687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.292956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.292989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 
00:36:10.725 [2024-12-14 22:45:31.293180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.293211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.293420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.293451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.293690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.293721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.293849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.293886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.294082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.294114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 
00:36:10.725 [2024-12-14 22:45:31.294301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.294332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.294466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.294497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.294759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.294790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.294963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.294996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.295127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.295158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 
00:36:10.725 [2024-12-14 22:45:31.295332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.295364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.295569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.295600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.295861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.725 [2024-12-14 22:45:31.295893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.725 qpair failed and we were unable to recover it. 00:36:10.725 [2024-12-14 22:45:31.296106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.296138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.296333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.296365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 
00:36:10.726 [2024-12-14 22:45:31.296492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.296523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.296780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.296811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.297024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.297058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.297304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.297336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.297523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.297555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 
00:36:10.726 [2024-12-14 22:45:31.297760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.297792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.298047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.298079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.298344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.298375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.298571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.298601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.298783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.298815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 
00:36:10.726 [2024-12-14 22:45:31.299051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.299085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.299282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.299314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.299492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.299523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.299693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.299724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.299858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.299890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 
00:36:10.726 [2024-12-14 22:45:31.300033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.300066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.300271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.300303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.300435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.300466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.300653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.300684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.300941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.300974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 
00:36:10.726 [2024-12-14 22:45:31.301092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.301124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.301255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.301287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.301522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.301554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.301656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.301687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.301889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.301929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 
00:36:10.726 [2024-12-14 22:45:31.302054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.302085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.302287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.302319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.302446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.302477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.302676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.302712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.302819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.302850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 
00:36:10.726 [2024-12-14 22:45:31.303045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.303078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.303250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.303281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.303463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.303494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.303614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.303645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.726 [2024-12-14 22:45:31.303859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.303889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 
00:36:10.726 [2024-12-14 22:45:31.304112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.726 [2024-12-14 22:45:31.304145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.726 qpair failed and we were unable to recover it. 00:36:10.727 [2024-12-14 22:45:31.304334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.727 [2024-12-14 22:45:31.304366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.727 qpair failed and we were unable to recover it. 00:36:10.727 [2024-12-14 22:45:31.304507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.727 [2024-12-14 22:45:31.304538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.727 qpair failed and we were unable to recover it. 00:36:10.727 [2024-12-14 22:45:31.304660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.727 [2024-12-14 22:45:31.304691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.727 qpair failed and we were unable to recover it. 00:36:10.727 [2024-12-14 22:45:31.304861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.727 [2024-12-14 22:45:31.304892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.727 qpair failed and we were unable to recover it. 
00:36:10.727 [2024-12-14 22:45:31.305028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.727 [2024-12-14 22:45:31.305059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.727 qpair failed and we were unable to recover it. 00:36:10.727 [2024-12-14 22:45:31.305231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.727 [2024-12-14 22:45:31.305262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.727 qpair failed and we were unable to recover it. 00:36:10.727 [2024-12-14 22:45:31.305439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.727 [2024-12-14 22:45:31.305471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.727 qpair failed and we were unable to recover it. 00:36:10.727 [2024-12-14 22:45:31.305595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.727 [2024-12-14 22:45:31.305627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.727 qpair failed and we were unable to recover it. 00:36:10.727 [2024-12-14 22:45:31.305798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.727 [2024-12-14 22:45:31.305829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.727 qpair failed and we were unable to recover it. 
00:36:10.730 [2024-12-14 22:45:31.329097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.329128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.329338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.329368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.329500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.329530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.329635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.329664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.329771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.329801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 
00:36:10.730 [2024-12-14 22:45:31.329978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.330008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.330144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.330174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.330363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.330392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.330578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.330607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.330783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.330814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 
00:36:10.730 [2024-12-14 22:45:31.331050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.331081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.331253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.331282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.331409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.331439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.331700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.331729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.331922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.331952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 
00:36:10.730 [2024-12-14 22:45:31.332145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.332175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.332357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.332386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.332626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.332657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.332826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.332856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.333048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.333079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 
00:36:10.730 [2024-12-14 22:45:31.333204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.333240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.333453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.333483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.333684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.333714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.333958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.333990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.334165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.334195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 
00:36:10.730 [2024-12-14 22:45:31.334321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.334351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.334533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.334564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.334757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.334786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.335044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.335075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.335264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.335294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 
00:36:10.730 [2024-12-14 22:45:31.335401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.335433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.335600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.335631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-12-14 22:45:31.335747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.730 [2024-12-14 22:45:31.335779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.335953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.335986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.336234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.336265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 
00:36:10.731 [2024-12-14 22:45:31.336374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.336405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.336587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.336619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.336890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.336932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.337062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.337093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.337283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.337315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 
00:36:10.731 [2024-12-14 22:45:31.337498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.337529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.337702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.337733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.337925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.337957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.338135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.338166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.338360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.338392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 
00:36:10.731 [2024-12-14 22:45:31.338600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.338631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.338749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.338780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.338978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.339011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.339268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.339300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.339550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.339581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 
00:36:10.731 [2024-12-14 22:45:31.339841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.339873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.340004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.340036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.340248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.340279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.340488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.340520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.340728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.340759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 
00:36:10.731 [2024-12-14 22:45:31.340992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.341024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.341214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.341245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.341425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.341456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.341661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.341693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.341971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.342004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 
00:36:10.731 [2024-12-14 22:45:31.342226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.342262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.342523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.342554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.342723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.342754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.342946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.342978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.343171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.343202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 
00:36:10.731 [2024-12-14 22:45:31.343393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.343424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.343596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.731 [2024-12-14 22:45:31.343626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-12-14 22:45:31.343862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.732 [2024-12-14 22:45:31.343893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.732 [2024-12-14 22:45:31.344100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.732 [2024-12-14 22:45:31.344132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.732 [2024-12-14 22:45:31.344308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.732 [2024-12-14 22:45:31.344339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.732 qpair failed and we were unable to recover it. 
00:36:10.732 [2024-12-14 22:45:31.344521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.732 [2024-12-14 22:45:31.344551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.732 [2024-12-14 22:45:31.344721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.732 [2024-12-14 22:45:31.344753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.732 [2024-12-14 22:45:31.344924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.732 [2024-12-14 22:45:31.344956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.732 [2024-12-14 22:45:31.345080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.732 [2024-12-14 22:45:31.345112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.732 [2024-12-14 22:45:31.345292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.732 [2024-12-14 22:45:31.345324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.732 qpair failed and we were unable to recover it. 
00:36:10.732 [2024-12-14 22:45:31.345528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.732 [2024-12-14 22:45:31.345559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.732 [2024-12-14 22:45:31.345762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.732 [2024-12-14 22:45:31.345794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.732 [2024-12-14 22:45:31.345943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.732 [2024-12-14 22:45:31.345976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.732 [2024-12-14 22:45:31.346107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.732 [2024-12-14 22:45:31.346139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.732 [2024-12-14 22:45:31.346255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.732 [2024-12-14 22:45:31.346286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.732 qpair failed and we were unable to recover it. 
00:36:10.735 [2024-12-14 22:45:31.370885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.370930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.371143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.371174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.371411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.371443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.371657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.371687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.371820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.371851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 
00:36:10.735 [2024-12-14 22:45:31.372143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.372175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.372381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.372413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.372515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.372546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.372808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.372839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.373027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.373061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 
00:36:10.735 [2024-12-14 22:45:31.373254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.373285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.373471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.373501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.373673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.373704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.373883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.373920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.374101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.374133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 
00:36:10.735 [2024-12-14 22:45:31.374307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.374338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.374586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.374617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.374810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.374841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.375132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.375164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.375280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.375311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 
00:36:10.735 [2024-12-14 22:45:31.375521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.375552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.375736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.375768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.375870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.375901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.376150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.376182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.376301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.376331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 
00:36:10.735 [2024-12-14 22:45:31.376581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.376613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.376786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.376817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.376941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.376975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.377166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.735 [2024-12-14 22:45:31.377198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.735 qpair failed and we were unable to recover it. 00:36:10.735 [2024-12-14 22:45:31.377364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.377395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 
00:36:10.736 [2024-12-14 22:45:31.377652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.377689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.377862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.377893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.378171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.378202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.378429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.378460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.378636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.378666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 
00:36:10.736 [2024-12-14 22:45:31.378924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.378957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.379199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.379231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.379495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.379526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.379717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.379747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.379931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.379964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 
00:36:10.736 [2024-12-14 22:45:31.380179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.380209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.380395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.380427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.380605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.380637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.380808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.380838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.381019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.381052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 
00:36:10.736 [2024-12-14 22:45:31.381307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.381338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.381545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.381576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.381709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.381740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.381878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.381918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.382116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.382148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 
00:36:10.736 [2024-12-14 22:45:31.382322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.382354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.382492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.382522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.382697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.382728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.382850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.382881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.383134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.383166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 
00:36:10.736 [2024-12-14 22:45:31.383285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.383317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.383492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.383524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.383748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.383818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.384043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.384081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.384261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.384295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 
00:36:10.736 [2024-12-14 22:45:31.384500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.384532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.384828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.384861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.385047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.385081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.385265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.385297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.385505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.385537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 
00:36:10.736 [2024-12-14 22:45:31.385659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.385690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.385875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.385918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.386104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.386137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.736 qpair failed and we were unable to recover it. 00:36:10.736 [2024-12-14 22:45:31.386263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.736 [2024-12-14 22:45:31.386295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.737 qpair failed and we were unable to recover it. 00:36:10.737 [2024-12-14 22:45:31.386558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.737 [2024-12-14 22:45:31.386589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.737 qpair failed and we were unable to recover it. 
00:36:10.737 [2024-12-14 22:45:31.386771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.737 [2024-12-14 22:45:31.386803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.737 qpair failed and we were unable to recover it. 00:36:10.737 [2024-12-14 22:45:31.386944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.737 [2024-12-14 22:45:31.386978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.737 qpair failed and we were unable to recover it. 00:36:10.737 [2024-12-14 22:45:31.387161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.737 [2024-12-14 22:45:31.387192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.737 qpair failed and we were unable to recover it. 00:36:10.737 [2024-12-14 22:45:31.387455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.737 [2024-12-14 22:45:31.387487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.737 qpair failed and we were unable to recover it. 00:36:10.737 [2024-12-14 22:45:31.387616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.737 [2024-12-14 22:45:31.387648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.737 qpair failed and we were unable to recover it. 
00:36:10.737 [2024-12-14 22:45:31.387917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:10.737 [2024-12-14 22:45:31.387949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 
00:36:10.737 qpair failed and we were unable to recover it. 
00:36:10.740 [2024-12-14 22:45:31.412412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.412446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.412634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.412666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.412790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.412824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.412957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.412993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.413113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.413145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 
00:36:10.740 [2024-12-14 22:45:31.413262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.413296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.413561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.413593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.413784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.413817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.413940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.413972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.414149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.414183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 
00:36:10.740 [2024-12-14 22:45:31.414354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.414388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.414595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.414628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.414754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.414786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.414997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.415030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.415242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.415274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 
00:36:10.740 [2024-12-14 22:45:31.415512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.415543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.415720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.415752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.415947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.415981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.416219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.416253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.416468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.416503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 
00:36:10.740 [2024-12-14 22:45:31.416692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.416725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.416896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.416943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.417084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.417117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.417255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.417287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.417410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.417443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 
00:36:10.740 [2024-12-14 22:45:31.417563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.417596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.417775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.417808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.418000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.418036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.418166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.418199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.418460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.418493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 
00:36:10.740 [2024-12-14 22:45:31.418675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.418714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.418822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.740 [2024-12-14 22:45:31.418855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.740 qpair failed and we were unable to recover it. 00:36:10.740 [2024-12-14 22:45:31.419050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.419084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.419262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.419296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.419480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.419513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 
00:36:10.741 [2024-12-14 22:45:31.419682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.419714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.419831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.419866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.420153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.420188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.420323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.420356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.420533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.420566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 
00:36:10.741 [2024-12-14 22:45:31.420739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.420771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.420896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.420937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.421120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.421154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.421294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.421328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.421536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.421569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 
00:36:10.741 [2024-12-14 22:45:31.421762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.421794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.421922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.421954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.422168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.422201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.422313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.422347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.422531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.422563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 
00:36:10.741 [2024-12-14 22:45:31.422733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.422765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.422892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.422936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.423053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.423085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.423329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.423363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.423541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.423572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 
00:36:10.741 [2024-12-14 22:45:31.423691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.423723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.423995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.424030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.424168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.424200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.424391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.424425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.424606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.424638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 
00:36:10.741 [2024-12-14 22:45:31.424757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.424789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.425039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.425074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.425270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.425302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.425504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.425537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.425678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.425712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 
00:36:10.741 [2024-12-14 22:45:31.425852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.425884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.426101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.426135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.426306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.426339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.426523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.426556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.426732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.426766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 
00:36:10.741 [2024-12-14 22:45:31.426944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.426978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.427118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.741 [2024-12-14 22:45:31.427150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.741 qpair failed and we were unable to recover it. 00:36:10.741 [2024-12-14 22:45:31.427281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.742 [2024-12-14 22:45:31.427314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.742 qpair failed and we were unable to recover it. 00:36:10.742 [2024-12-14 22:45:31.427508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.742 [2024-12-14 22:45:31.427541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.742 qpair failed and we were unable to recover it. 00:36:10.742 [2024-12-14 22:45:31.427718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.742 [2024-12-14 22:45:31.427751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.742 qpair failed and we were unable to recover it. 
00:36:10.742 [2024-12-14 22:45:31.427876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.742 [2024-12-14 22:45:31.427918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.742 qpair failed and we were unable to recover it. 00:36:10.742 [2024-12-14 22:45:31.428184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.742 [2024-12-14 22:45:31.428218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.742 qpair failed and we were unable to recover it. 00:36:10.742 [2024-12-14 22:45:31.428338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.742 [2024-12-14 22:45:31.428372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.742 qpair failed and we were unable to recover it. 00:36:10.742 [2024-12-14 22:45:31.428499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.742 [2024-12-14 22:45:31.428532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.742 qpair failed and we were unable to recover it. 00:36:10.742 [2024-12-14 22:45:31.428650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.742 [2024-12-14 22:45:31.428682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.742 qpair failed and we were unable to recover it. 
00:36:10.745 [2024-12-14 22:45:31.451473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.451505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.451683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.451717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.451840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.451874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.452076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60b5e0 is same with the state(6) to be set 00:36:10.745 [2024-12-14 22:45:31.452391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.452463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.452604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.452641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 
00:36:10.745 [2024-12-14 22:45:31.452783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.452817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.453060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.453097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.453217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.453249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.453511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.453544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.453786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.453820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 
00:36:10.745 [2024-12-14 22:45:31.453945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.453980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.454159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.454201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.454319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.454353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.454523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.454557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.454739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.454773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 
00:36:10.745 [2024-12-14 22:45:31.454916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.454951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.455117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.455160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.455338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.455379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.455551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.455584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.455710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.455743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 
00:36:10.745 [2024-12-14 22:45:31.455934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.455970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.456188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.456221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.456392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.456426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.456609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.456643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.456885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.456929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 
00:36:10.745 [2024-12-14 22:45:31.457044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.457078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.457249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.457282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.457387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.457420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.457588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.457622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.457799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.457831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 
00:36:10.745 [2024-12-14 22:45:31.457957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.457993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.458234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.458267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.458540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.458573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.458700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.458734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 00:36:10.745 [2024-12-14 22:45:31.458941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.745 [2024-12-14 22:45:31.458976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.745 qpair failed and we were unable to recover it. 
00:36:10.746 [2024-12-14 22:45:31.459241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.459273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.459390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.459424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.459531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.459565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.459744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.459777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.459970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.460006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 
00:36:10.746 [2024-12-14 22:45:31.460198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.460231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.460362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.460396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.460563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.460596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.460843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.460877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.461020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.461053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 
00:36:10.746 [2024-12-14 22:45:31.461231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.461264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.461483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.461516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.461699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.461732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.461925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.461959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.462131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.462164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 
00:36:10.746 [2024-12-14 22:45:31.462354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.462387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.462499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.462531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.462710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.462743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.462922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.462956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.463185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.463219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 
00:36:10.746 [2024-12-14 22:45:31.463344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.463377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.463504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.463542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.463728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.463762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.463931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.463965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.464075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.464108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 
00:36:10.746 [2024-12-14 22:45:31.464241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.464274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.464464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.464498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.464680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.464713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.464911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.464944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.465075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.465109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 
00:36:10.746 [2024-12-14 22:45:31.465286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.465319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.465501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.465534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.465711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.465742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.465852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.465885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.466033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.466066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 
00:36:10.746 [2024-12-14 22:45:31.466186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.466219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.466392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.466426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.466543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.466575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.746 [2024-12-14 22:45:31.466837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.746 [2024-12-14 22:45:31.466871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.746 qpair failed and we were unable to recover it. 00:36:10.747 [2024-12-14 22:45:31.467136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.747 [2024-12-14 22:45:31.467171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.747 qpair failed and we were unable to recover it. 
00:36:10.747 [2024-12-14 22:45:31.467284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.747 [2024-12-14 22:45:31.467316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:10.747 qpair failed and we were unable to recover it.
00:36:10.750 [2024-12-14 22:45:31.491334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.491367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.491548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.491581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.491828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.491861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.492063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.492096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.492287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.492320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 
00:36:10.750 [2024-12-14 22:45:31.492429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.492460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.492646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.492679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.492865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.492898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.493093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.493127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.493403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.493435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 
00:36:10.750 [2024-12-14 22:45:31.493611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.493644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.493761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.493794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.494012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.494047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.494171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.494208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.494336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.494368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 
00:36:10.750 [2024-12-14 22:45:31.494608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.494641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.494913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.494947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.495062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.495094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.495227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.495258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.495433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.495466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 
00:36:10.750 [2024-12-14 22:45:31.495672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.495704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.495889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.495933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.496058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.496090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.496224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.496257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.496437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.496469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 
00:36:10.750 [2024-12-14 22:45:31.496712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.496745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.496856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.496888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.497061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.497094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.497292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.497324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.497503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.497537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 
00:36:10.750 [2024-12-14 22:45:31.497652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.497682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.497873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.497914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.498085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.498118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.498327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.498359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 00:36:10.750 [2024-12-14 22:45:31.498481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.750 [2024-12-14 22:45:31.498513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.750 qpair failed and we were unable to recover it. 
00:36:10.750 [2024-12-14 22:45:31.498617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.498649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.498872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.498912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.499191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.499224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.499398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.499430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.499551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.499584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 
00:36:10.751 [2024-12-14 22:45:31.499878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.499932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.500218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.500251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.500443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.500476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.500667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.500699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.500814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.500848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 
00:36:10.751 [2024-12-14 22:45:31.501049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.501082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.501255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.501286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.501475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.501508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.501626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.501659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.501949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.501984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 
00:36:10.751 [2024-12-14 22:45:31.502094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.502125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.502330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.502362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.502554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.502586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.502694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.502731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.502982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.503017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 
00:36:10.751 [2024-12-14 22:45:31.503210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.503242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.503422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.503454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.503579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.503612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.503792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.503824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.503961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.503995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 
00:36:10.751 [2024-12-14 22:45:31.504185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.504218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.504384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.504416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.504656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.504688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.504951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.504985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.505214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.505247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 
00:36:10.751 [2024-12-14 22:45:31.505429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.505460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.505585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.505617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.505816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.505849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.505985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.506019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.506141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.506174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 
00:36:10.751 [2024-12-14 22:45:31.506355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.506388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.506492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.506525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.506705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.506737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.506934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.751 [2024-12-14 22:45:31.506968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.751 qpair failed and we were unable to recover it. 00:36:10.751 [2024-12-14 22:45:31.507098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.752 [2024-12-14 22:45:31.507131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.752 qpair failed and we were unable to recover it. 
00:36:10.752 [2024-12-14 22:45:31.507258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.752 [2024-12-14 22:45:31.507290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:10.752 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure retries for tqpair=0x7f1a94000b90 repeated through 22:45:31.513197, after which the tqpair handle changes to 0x7f1a98000b90 ...]
00:36:10.752 [2024-12-14 22:45:31.513197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.752 [2024-12-14 22:45:31.513269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.752 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure retries for tqpair=0x7f1a98000b90 repeated through 22:45:31.531321 ...]
00:36:10.755 [2024-12-14 22:45:31.531456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.531488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.531769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.531803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.531993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.532027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.532163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.532195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.532345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.532378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 
00:36:10.755 [2024-12-14 22:45:31.532551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.532584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.532773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.532805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.533043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.533077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.533272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.533305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.533446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.533479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 
00:36:10.755 [2024-12-14 22:45:31.533659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.533692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.533798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.533831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.534018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.534052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.534256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.534289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.534478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.534511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 
00:36:10.755 [2024-12-14 22:45:31.534712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.534744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.534919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.534953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.535085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.535119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.535250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.535282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.535542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.535574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 
00:36:10.755 [2024-12-14 22:45:31.535679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.535712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.535884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.535926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.536063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.536096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.536221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.536259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.536442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.536474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 
00:36:10.755 [2024-12-14 22:45:31.536657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.536690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.536807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.536840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.537077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.537111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.537297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.537330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.537526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.537559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 
00:36:10.755 [2024-12-14 22:45:31.537687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.537720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.537899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.537943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.538116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.538149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.538386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.538419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.755 [2024-12-14 22:45:31.538595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.538628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 
00:36:10.755 [2024-12-14 22:45:31.538740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.755 [2024-12-14 22:45:31.538773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.755 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.538977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.539015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.539201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.539235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.539347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.539378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.539574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.539607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 
00:36:10.756 [2024-12-14 22:45:31.539779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.539812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.540036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.540070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.540206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.540238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.540479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.540512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.540698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.540731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 
00:36:10.756 [2024-12-14 22:45:31.541007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.541041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.541298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.541331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.541506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.541539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.541753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.541786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.542028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.542062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 
00:36:10.756 [2024-12-14 22:45:31.542248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.542319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.542519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.542557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.542760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.542794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.542966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.543001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.543119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.543151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 
00:36:10.756 [2024-12-14 22:45:31.543272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.543305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.543517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.543550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.543793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.543826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.544040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.544074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.544266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.544298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 
00:36:10.756 [2024-12-14 22:45:31.544492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.544525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.544715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.544748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.544926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.544960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.545087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.545129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.545312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.545345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 
00:36:10.756 [2024-12-14 22:45:31.545589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.545622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.545737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.545767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.545889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.545934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.546068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.546101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.546231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.546264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 
00:36:10.756 [2024-12-14 22:45:31.546452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.546483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.546599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.546632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.546808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.546841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.547036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.547070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 00:36:10.756 [2024-12-14 22:45:31.547331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.756 [2024-12-14 22:45:31.547364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.756 qpair failed and we were unable to recover it. 
00:36:10.757 [2024-12-14 22:45:31.547552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.757 [2024-12-14 22:45:31.547585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.757 qpair failed and we were unable to recover it. 
00:36:10.758 [error pair above repeated 114 more times between 22:45:31.547824 and 22:45:31.573217, identical except for timestamps: connect() failed with errno = 111 (ECONNREFUSED), and the qpair to 10.0.0.2 port 4420 could not be recovered]
00:36:10.760 [2024-12-14 22:45:31.573350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.573382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.573559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.573591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.573781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.573813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.573934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.573970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.574235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.574268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 
00:36:10.760 [2024-12-14 22:45:31.574467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.574500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.574762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.574795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.574973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.575007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.575211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.575244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.575381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.575413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 
00:36:10.760 [2024-12-14 22:45:31.575593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.575625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.575863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.575896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.576081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.576114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.576236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.576270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.576461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.576494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 
00:36:10.760 [2024-12-14 22:45:31.576615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.576648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.576947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.576982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.577114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.577148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.577409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.577442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.577634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.577667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 
00:36:10.760 [2024-12-14 22:45:31.577857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.577890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.578019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.578052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.578177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.578210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.578395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.578427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.578598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.578631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 
00:36:10.760 [2024-12-14 22:45:31.578872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.578914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.579099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.579131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.579343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.579377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.579555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.579589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.579735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.579767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 
00:36:10.760 [2024-12-14 22:45:31.579963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.579999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.580259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.580292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.580505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.580538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.580723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.580757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.580944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.580979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 
00:36:10.760 [2024-12-14 22:45:31.581110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.581143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.760 [2024-12-14 22:45:31.581408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.760 [2024-12-14 22:45:31.581441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.760 qpair failed and we were unable to recover it. 00:36:10.761 [2024-12-14 22:45:31.581584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.761 [2024-12-14 22:45:31.581618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.761 qpair failed and we were unable to recover it. 00:36:10.761 [2024-12-14 22:45:31.581837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.761 [2024-12-14 22:45:31.581870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.761 qpair failed and we were unable to recover it. 00:36:10.761 [2024-12-14 22:45:31.581996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.761 [2024-12-14 22:45:31.582029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.761 qpair failed and we were unable to recover it. 
00:36:10.761 [2024-12-14 22:45:31.582206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.761 [2024-12-14 22:45:31.582239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.761 qpair failed and we were unable to recover it. 00:36:10.761 [2024-12-14 22:45:31.582370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.761 [2024-12-14 22:45:31.582403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.761 qpair failed and we were unable to recover it. 00:36:10.761 [2024-12-14 22:45:31.582619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.761 [2024-12-14 22:45:31.582652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.761 qpair failed and we were unable to recover it. 00:36:10.761 [2024-12-14 22:45:31.582892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.761 [2024-12-14 22:45:31.582933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.761 qpair failed and we were unable to recover it. 00:36:10.761 [2024-12-14 22:45:31.583176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.761 [2024-12-14 22:45:31.583209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.761 qpair failed and we were unable to recover it. 
00:36:10.761 [2024-12-14 22:45:31.583395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.761 [2024-12-14 22:45:31.583428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.761 qpair failed and we were unable to recover it. 00:36:10.761 [2024-12-14 22:45:31.583533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.761 [2024-12-14 22:45:31.583565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:10.761 qpair failed and we were unable to recover it. 00:36:10.761 [2024-12-14 22:45:31.583755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.040 [2024-12-14 22:45:31.583788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.040 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.583987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.584024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.584198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.584231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 
00:36:11.041 [2024-12-14 22:45:31.584356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.584390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.584590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.584623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.584744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.584777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.584933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.584968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.585247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.585280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 
00:36:11.041 [2024-12-14 22:45:31.585462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.585496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.585603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.585636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.585811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.585844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.586091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.586125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.586244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.586277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 
00:36:11.041 [2024-12-14 22:45:31.586465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.586503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.586611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.586643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.586758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.586791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.586915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.586949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.587118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.587150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 
00:36:11.041 [2024-12-14 22:45:31.587324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.587357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.587532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.587564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.587691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.587724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.587922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.587957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.588235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.588268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 
00:36:11.041 [2024-12-14 22:45:31.588447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.588479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.588666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.588699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.588823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.588856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.589073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.589108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.589309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.589341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 
00:36:11.041 [2024-12-14 22:45:31.589519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.589551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.589737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.589769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.589894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.589940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.590045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.590078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 00:36:11.041 [2024-12-14 22:45:31.590203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.041 [2024-12-14 22:45:31.590235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.041 qpair failed and we were unable to recover it. 
00:36:11.044 (connect() failed, errno = 111 / qpair failure for tqpair=0x7f1aa0000b90, addr=10.0.0.2, port=4420 repeated through 22:45:31.614507; no attempt recovered)
00:36:11.044 [2024-12-14 22:45:31.614771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.044 [2024-12-14 22:45:31.614804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.044 qpair failed and we were unable to recover it. 00:36:11.044 [2024-12-14 22:45:31.615003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.044 [2024-12-14 22:45:31.615037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.044 qpair failed and we were unable to recover it. 00:36:11.044 [2024-12-14 22:45:31.615215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.044 [2024-12-14 22:45:31.615248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.044 qpair failed and we were unable to recover it. 00:36:11.044 [2024-12-14 22:45:31.615459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.044 [2024-12-14 22:45:31.615491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.044 qpair failed and we were unable to recover it. 00:36:11.044 [2024-12-14 22:45:31.615761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.044 [2024-12-14 22:45:31.615794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.044 qpair failed and we were unable to recover it. 
00:36:11.044 [2024-12-14 22:45:31.615919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.044 [2024-12-14 22:45:31.615953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.044 qpair failed and we were unable to recover it. 00:36:11.044 [2024-12-14 22:45:31.616057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.044 [2024-12-14 22:45:31.616089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.044 qpair failed and we were unable to recover it. 00:36:11.044 [2024-12-14 22:45:31.616350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.044 [2024-12-14 22:45:31.616383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.044 qpair failed and we were unable to recover it. 00:36:11.044 [2024-12-14 22:45:31.616488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.044 [2024-12-14 22:45:31.616521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.044 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.616708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.616741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 
00:36:11.045 [2024-12-14 22:45:31.616946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.616980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.617096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.617130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.617243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.617276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.617388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.617421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.617609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.617642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 
00:36:11.045 [2024-12-14 22:45:31.617830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.617863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.617989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.618023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.618225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.618258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.618473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.618506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.618750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.618782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 
00:36:11.045 [2024-12-14 22:45:31.618965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.618998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.619238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.619272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.619459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.619492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.619626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.619659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.619785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.619818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 
00:36:11.045 [2024-12-14 22:45:31.620015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.620049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.620239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.620271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.620455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.620488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.620687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.620720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.620842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.620875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 
00:36:11.045 [2024-12-14 22:45:31.621127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.621167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.621433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.621466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.621605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.621638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.621917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.621952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.622120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.622152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 
00:36:11.045 [2024-12-14 22:45:31.622327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.622360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.622545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.622579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.622702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.622735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.622923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.622959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.623140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.623174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 
00:36:11.045 [2024-12-14 22:45:31.623356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.623389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.045 [2024-12-14 22:45:31.623511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.045 [2024-12-14 22:45:31.623544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.045 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.623805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.623838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.624027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.624062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.624308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.624341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 
00:36:11.046 [2024-12-14 22:45:31.624511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.624544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.624666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.624699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.624929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.624963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.625229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.625263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.625369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.625401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 
00:36:11.046 [2024-12-14 22:45:31.625589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.625622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.625756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.625789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.625932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.625965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.626137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.626170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.626341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.626374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 
00:36:11.046 [2024-12-14 22:45:31.626582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.626614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.626854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.626887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.627036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.627070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.627246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.627279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.627456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.627488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 
00:36:11.046 [2024-12-14 22:45:31.627703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.627737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.627926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.627961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.628094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.628127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.628306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.628340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.628518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.628550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 
00:36:11.046 [2024-12-14 22:45:31.628802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.628834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.629022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.629056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.629165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.629197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.629409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.629442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.629626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.629658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 
00:36:11.046 [2024-12-14 22:45:31.629896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.629942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.630072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.630105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.630275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.630307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.630542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.630575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 00:36:11.046 [2024-12-14 22:45:31.630839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.046 [2024-12-14 22:45:31.630872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.046 qpair failed and we were unable to recover it. 
00:36:11.046 [2024-12-14 22:45:31.631013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.046 [2024-12-14 22:45:31.631046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.046 qpair failed and we were unable to recover it.
00:36:11.046 [2024-12-14 22:45:31.631181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.046 [2024-12-14 22:45:31.631214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.046 qpair failed and we were unable to recover it.
00:36:11.046 [2024-12-14 22:45:31.631389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.046 [2024-12-14 22:45:31.631423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.046 qpair failed and we were unable to recover it.
00:36:11.046 [2024-12-14 22:45:31.631524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.046 [2024-12-14 22:45:31.631557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.046 qpair failed and we were unable to recover it.
00:36:11.046 [2024-12-14 22:45:31.631659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.046 [2024-12-14 22:45:31.631692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.046 qpair failed and we were unable to recover it.
00:36:11.046 [2024-12-14 22:45:31.631880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.046 [2024-12-14 22:45:31.631925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.046 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.632040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.632073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.632168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.632200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.632445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.632478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.632656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.632689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.632875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.632928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.633173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.633207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.633397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.633429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.633550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.633583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.633693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.633726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.633923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.633958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.634189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.634222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.634358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.634391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.634563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.634595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.634785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.634818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.635000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.635035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.635298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.635331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.635522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.635556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.635785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.635818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.635995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.636029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.636287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.636320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.636433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.636466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.636647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.636680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.636808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.636841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.637103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.637136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.637347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.637380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.637564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.637597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.637705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.637737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.637944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.637978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.638121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.638154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.638337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.638376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.638557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.638590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.638849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.638883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.639066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.639099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.639286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.639319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.639503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.639537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.639797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.639830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.640015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.640050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.640229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.640262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.047 [2024-12-14 22:45:31.640527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.047 [2024-12-14 22:45:31.640560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.047 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.640735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.640768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.640890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.640953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.641085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.641119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.641300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.641332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.641465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.641498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.641615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.641648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.641887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.641931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.642109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.642142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.642324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.642357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.642484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.642516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.642775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.642808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.642982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.643016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.643188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.643221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.643340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.643373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.643499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.643532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.643647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.643680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.643812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.643845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.644079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.644151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.644384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.644422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.644601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.644634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.644817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.644851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.645114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.645149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.645354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.645388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.645628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.645660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.645781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.645814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.645954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.645989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.646100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.646134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.646252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.646283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.646519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.646551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.646737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.646769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.646912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.646946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.647133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.647167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.647274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.647305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.647480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.647511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.647704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.647737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.647916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.647950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.648155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.648189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.648397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.648430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.048 [2024-12-14 22:45:31.648543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.048 [2024-12-14 22:45:31.648575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.048 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.648704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.648736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.648860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.648893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.649019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.649051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.649290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.649324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.649506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.649539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.649665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.649704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.649878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.649921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.650122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.650154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.650260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.650291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.650461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.650496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.650683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.650715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.650930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.650966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.651149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.651181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.651444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.651477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.651741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.651774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.651894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.651934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.652123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.652155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.652291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.652325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.652507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.652540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.652809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.652842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.652970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.653004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.653124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.653155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.653267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.653300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.653560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.653593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.653764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.653797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.654002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.654036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.654144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.654173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.654385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.654420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.654627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.654661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.654837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.654869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.655094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.049 [2024-12-14 22:45:31.655128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.049 qpair failed and we were unable to recover it.
00:36:11.049 [2024-12-14 22:45:31.655364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.049 [2024-12-14 22:45:31.655397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.049 qpair failed and we were unable to recover it. 00:36:11.049 [2024-12-14 22:45:31.655590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.049 [2024-12-14 22:45:31.655630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.049 qpair failed and we were unable to recover it. 00:36:11.049 [2024-12-14 22:45:31.655809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.049 [2024-12-14 22:45:31.655841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.049 qpair failed and we were unable to recover it. 00:36:11.049 [2024-12-14 22:45:31.656052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.049 [2024-12-14 22:45:31.656086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.049 qpair failed and we were unable to recover it. 00:36:11.049 [2024-12-14 22:45:31.656260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.049 [2024-12-14 22:45:31.656292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.049 qpair failed and we were unable to recover it. 
00:36:11.049 [2024-12-14 22:45:31.656410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.049 [2024-12-14 22:45:31.656443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.049 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.656720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.656753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.656886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.656928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.657111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.657143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.657381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.657413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 
00:36:11.050 [2024-12-14 22:45:31.657583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.657615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.657735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.657768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.658034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.658068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.658264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.658297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.658470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.658502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 
00:36:11.050 [2024-12-14 22:45:31.658679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.658713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.658891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.658941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.659067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.659099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.659290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.659324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.659509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.659542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 
00:36:11.050 [2024-12-14 22:45:31.659734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.659766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.660020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.660055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.660194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.660226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.660514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.660546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.660720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.660752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 
00:36:11.050 [2024-12-14 22:45:31.660877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.660935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.661053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.661087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.661209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.661240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.661364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.661396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.661585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.661618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 
00:36:11.050 [2024-12-14 22:45:31.661739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.661771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.661915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.661949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.662196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.662229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.662543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.662576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.662794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.662827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 
00:36:11.050 [2024-12-14 22:45:31.663021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.663056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.663187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.663220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.663424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.663456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.663665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.663697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.663801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.663832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 
00:36:11.050 [2024-12-14 22:45:31.664012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.664047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.664229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.664263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.664393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.050 [2024-12-14 22:45:31.664430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.050 qpair failed and we were unable to recover it. 00:36:11.050 [2024-12-14 22:45:31.664545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.664579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.664683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.664715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 
00:36:11.051 [2024-12-14 22:45:31.664848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.664879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.665070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.665106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.665237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.665269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.665394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.665426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.665535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.665568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 
00:36:11.051 [2024-12-14 22:45:31.665691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.665723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.665894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.665936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.666122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.666155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.666293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.666326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.666441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.666474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 
00:36:11.051 [2024-12-14 22:45:31.666645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.666677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.666859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.666892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.667092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.667126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.667343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.667375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.667496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.667528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 
00:36:11.051 [2024-12-14 22:45:31.667660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.667693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.667881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.667924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.668193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.668226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.668354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.668387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.668580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.668612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 
00:36:11.051 [2024-12-14 22:45:31.668823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.668856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.669003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.669037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.669279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.669313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.669488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.669521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.669639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.669676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 
00:36:11.051 [2024-12-14 22:45:31.669861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.669893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.670162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.670195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.670322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.670356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.670640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.670673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.670803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.670836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 
00:36:11.051 [2024-12-14 22:45:31.671078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.671113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.671283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.671317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.671509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.671541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.671731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.671763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.672029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.672064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 
00:36:11.051 [2024-12-14 22:45:31.672269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.672302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.672476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.672507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.672649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.051 [2024-12-14 22:45:31.672681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.051 qpair failed and we were unable to recover it. 00:36:11.051 [2024-12-14 22:45:31.672820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.672854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.672985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.673017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 
00:36:11.052 [2024-12-14 22:45:31.673204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.673236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.673426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.673460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.673658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.673691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.673796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.673827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.674078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.674113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 
00:36:11.052 [2024-12-14 22:45:31.674376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.674409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.674589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.674621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.674882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.674924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.675063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.675097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.675270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.675303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 
00:36:11.052 [2024-12-14 22:45:31.675445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.675478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.675715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.675749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.675878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.675920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.676097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.676130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.676319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.676353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 
00:36:11.052 [2024-12-14 22:45:31.676534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.676567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.676753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.676786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.677062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.677098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.677285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.677318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.677545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.677578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 
00:36:11.052 [2024-12-14 22:45:31.677753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.677786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.678054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.678089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.678196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.678229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.678411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.678444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.678629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.678662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 
00:36:11.052 [2024-12-14 22:45:31.678932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.678972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.679153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.679186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.679294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.679327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.679522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.679555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.679726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.679758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 
00:36:11.052 [2024-12-14 22:45:31.679945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.679979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.680168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.680202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.680318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.680350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.680638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.680672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.680948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.680982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 
00:36:11.052 [2024-12-14 22:45:31.681171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.681205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.681376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.681409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.052 [2024-12-14 22:45:31.681675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.052 [2024-12-14 22:45:31.681708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.052 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.681922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.681957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.682090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.682124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 
00:36:11.053 [2024-12-14 22:45:31.682306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.682339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.682521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.682554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.682761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.682794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.682966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.683001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.683174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.683207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 
00:36:11.053 [2024-12-14 22:45:31.683335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.683368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.683586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.683619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.683819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.683852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.684044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.684079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.684194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.684226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 
00:36:11.053 [2024-12-14 22:45:31.684331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.684364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.684503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.684536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.684716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.684749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.684878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.684918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.685093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.685127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 
00:36:11.053 [2024-12-14 22:45:31.685252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.685284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.685458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.685490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.685743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.685775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.685974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.686009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.686116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.686150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 
00:36:11.053 [2024-12-14 22:45:31.686393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.686425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.686691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.686725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.686900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.686943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.687115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.687147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.687364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.687397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 
00:36:11.053 [2024-12-14 22:45:31.687657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.687690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.687834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.687868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.688001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.688035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.688238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.688272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.688464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.688496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 
00:36:11.053 [2024-12-14 22:45:31.688677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.688710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.688888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.688943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.689183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.689216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.689322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.689356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.689544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.689576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 
00:36:11.053 [2024-12-14 22:45:31.689710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.689741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.689945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.053 [2024-12-14 22:45:31.689980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.053 qpair failed and we were unable to recover it. 00:36:11.053 [2024-12-14 22:45:31.690153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.054 [2024-12-14 22:45:31.690186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.054 qpair failed and we were unable to recover it. 00:36:11.054 [2024-12-14 22:45:31.690418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.054 [2024-12-14 22:45:31.690450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.054 qpair failed and we were unable to recover it. 00:36:11.054 [2024-12-14 22:45:31.690641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.054 [2024-12-14 22:45:31.690674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.054 qpair failed and we were unable to recover it. 
00:36:11.054 [2024-12-14 22:45:31.690920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.054 [2024-12-14 22:45:31.690954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.054 qpair failed and we were unable to recover it. 00:36:11.054 [2024-12-14 22:45:31.691130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.054 [2024-12-14 22:45:31.691163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.054 qpair failed and we were unable to recover it. 00:36:11.054 [2024-12-14 22:45:31.691400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.054 [2024-12-14 22:45:31.691433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.054 qpair failed and we were unable to recover it. 00:36:11.054 [2024-12-14 22:45:31.691618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.054 [2024-12-14 22:45:31.691650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.054 qpair failed and we were unable to recover it. 00:36:11.054 [2024-12-14 22:45:31.691831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.054 [2024-12-14 22:45:31.691864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.054 qpair failed and we were unable to recover it. 
00:36:11.054 [2024-12-14 22:45:31.691999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.054 [2024-12-14 22:45:31.692032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.054 qpair failed and we were unable to recover it. 00:36:11.054 [2024-12-14 22:45:31.692269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.054 [2024-12-14 22:45:31.692302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.054 qpair failed and we were unable to recover it. 00:36:11.054 [2024-12-14 22:45:31.692476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.054 [2024-12-14 22:45:31.692508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.054 qpair failed and we were unable to recover it. 00:36:11.054 [2024-12-14 22:45:31.692680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.054 [2024-12-14 22:45:31.692713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.054 qpair failed and we were unable to recover it. 00:36:11.054 [2024-12-14 22:45:31.692900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.054 [2024-12-14 22:45:31.692958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.054 qpair failed and we were unable to recover it. 
00:36:11.054 [2024-12-14 22:45:31.693141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.054 [2024-12-14 22:45:31.693172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.054 qpair failed and we were unable to recover it. 00:36:11.054 [2024-12-14 22:45:31.693364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.054 [2024-12-14 22:45:31.693396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.054 qpair failed and we were unable to recover it. 00:36:11.054 [2024-12-14 22:45:31.693498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.054 [2024-12-14 22:45:31.693532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.054 qpair failed and we were unable to recover it. 00:36:11.054 [2024-12-14 22:45:31.693729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.054 [2024-12-14 22:45:31.693768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.054 qpair failed and we were unable to recover it. 00:36:11.054 [2024-12-14 22:45:31.694011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.054 [2024-12-14 22:45:31.694046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.054 qpair failed and we were unable to recover it. 
00:36:11.054 [2024-12-14 22:45:31.694337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.054 [2024-12-14 22:45:31.694369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.054 qpair failed and we were unable to recover it.
00:36:11.054 [2024-12-14 22:45:31.694552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.054 [2024-12-14 22:45:31.694585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.054 qpair failed and we were unable to recover it.
00:36:11.054 [2024-12-14 22:45:31.694715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.054 [2024-12-14 22:45:31.694747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.054 qpair failed and we were unable to recover it.
00:36:11.054 [2024-12-14 22:45:31.694991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.054 [2024-12-14 22:45:31.695027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.054 qpair failed and we were unable to recover it.
00:36:11.054 [2024-12-14 22:45:31.695291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.054 [2024-12-14 22:45:31.695324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.054 qpair failed and we were unable to recover it.
00:36:11.054 [2024-12-14 22:45:31.695523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.054 [2024-12-14 22:45:31.695556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.054 qpair failed and we were unable to recover it.
00:36:11.054 [2024-12-14 22:45:31.695668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.054 [2024-12-14 22:45:31.695701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.054 qpair failed and we were unable to recover it.
00:36:11.054 [2024-12-14 22:45:31.695946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.054 [2024-12-14 22:45:31.695981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.054 qpair failed and we were unable to recover it.
00:36:11.054 [2024-12-14 22:45:31.696159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.054 [2024-12-14 22:45:31.696192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.054 qpair failed and we were unable to recover it.
00:36:11.054 [2024-12-14 22:45:31.696326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.054 [2024-12-14 22:45:31.696359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.054 qpair failed and we were unable to recover it.
00:36:11.054 [2024-12-14 22:45:31.696542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.054 [2024-12-14 22:45:31.696575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.054 qpair failed and we were unable to recover it.
00:36:11.054 [2024-12-14 22:45:31.696774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.054 [2024-12-14 22:45:31.696806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.054 qpair failed and we were unable to recover it.
00:36:11.054 [2024-12-14 22:45:31.697052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.054 [2024-12-14 22:45:31.697087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.054 qpair failed and we were unable to recover it.
00:36:11.054 [2024-12-14 22:45:31.697277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.054 [2024-12-14 22:45:31.697309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.054 qpair failed and we were unable to recover it.
00:36:11.054 [2024-12-14 22:45:31.697573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.054 [2024-12-14 22:45:31.697606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.054 qpair failed and we were unable to recover it.
00:36:11.054 [2024-12-14 22:45:31.697800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.054 [2024-12-14 22:45:31.697832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.054 qpair failed and we were unable to recover it.
00:36:11.054 [2024-12-14 22:45:31.698075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.054 [2024-12-14 22:45:31.698108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.054 qpair failed and we were unable to recover it.
00:36:11.054 [2024-12-14 22:45:31.698226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.698258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.698437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.698470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.698707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.698740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.698984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.699019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.699304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.699336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.699524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.699557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.699747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.699781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.700054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.700089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.700292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.700325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.700499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.700532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.700724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.700756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.700888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.700951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.701149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.701182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.701369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.701400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.701637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.701671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.701776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.701808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.701981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.702015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.702211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.702243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.702367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.702398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.702661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.702694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.702898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.702952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.703075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.703108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.703305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.703343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.703530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.703565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.703834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.703866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.704141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.704175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.704359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.704393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.704573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.704605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.704743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.704775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.704896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.704964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.705157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.705191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.705435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.705468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.705718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.705751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.705859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.705891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.706034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.706067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.706236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.706269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.706384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.706418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.706622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.706654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.706827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.706859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.055 [2024-12-14 22:45:31.707057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.055 [2024-12-14 22:45:31.707091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.055 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.707269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.707302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.707474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.707507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.707681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.707715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.707910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.707951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.708193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.708226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.708401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.708432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.708704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.708738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.708942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.708977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.709165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.709197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.709337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.709374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.709497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.709531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.709710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.709744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.709887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.709926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.710115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.710147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.710385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.710418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.710707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.710740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.710916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.710949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.711077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.711110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.711235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.711267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.711441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.711473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.711593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.711627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.711815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.711848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.712102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.712136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.712328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.712361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.712536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.712567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.712757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.712788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.713010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.713045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.713259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.713292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.713481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.713514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.713684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.713717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.713852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.713884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.714008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.714041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.714156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.714188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.714444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.714477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.714603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.714636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.714809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.714841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.715079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.715114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.715359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.056 [2024-12-14 22:45:31.715393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.056 qpair failed and we were unable to recover it.
00:36:11.056 [2024-12-14 22:45:31.715527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.056 [2024-12-14 22:45:31.715565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.056 qpair failed and we were unable to recover it. 00:36:11.056 [2024-12-14 22:45:31.715870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.715913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.716037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.716068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.716335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.716367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.716490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.716521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 
00:36:11.057 [2024-12-14 22:45:31.716715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.716748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.716939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.716972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.717173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.717206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.717340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.717371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.717491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.717523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 
00:36:11.057 [2024-12-14 22:45:31.717780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.717812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.717933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.717968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.718144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.718182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.718376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.718408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.718579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.718611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 
00:36:11.057 [2024-12-14 22:45:31.718793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.718825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.719068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.719103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.719295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.719329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.719459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.719492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.719689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.719720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 
00:36:11.057 [2024-12-14 22:45:31.719891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.719936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.720055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.720088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.720259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.720290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.720534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.720566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.720693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.720725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 
00:36:11.057 [2024-12-14 22:45:31.720898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.720962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.721104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.721137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.721250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.721283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.721457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.721489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.721660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.721692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 
00:36:11.057 [2024-12-14 22:45:31.721868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.721901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.722033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.722064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.722240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.722273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.722397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.722431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.722544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.722575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 
00:36:11.057 [2024-12-14 22:45:31.722687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.722719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.722892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.722937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.723221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.723254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.723496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.723529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.723641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.723679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 
00:36:11.057 [2024-12-14 22:45:31.723855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.057 [2024-12-14 22:45:31.723887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.057 qpair failed and we were unable to recover it. 00:36:11.057 [2024-12-14 22:45:31.724072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.724105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.724277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.724310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.724548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.724582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.724760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.724792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 
00:36:11.058 [2024-12-14 22:45:31.724951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.724986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.725176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.725209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.725388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.725420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.725543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.725574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.725763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.725796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 
00:36:11.058 [2024-12-14 22:45:31.725928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.725961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.726149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.726181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.726291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.726323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.726591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.726662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.726809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.726845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 
00:36:11.058 [2024-12-14 22:45:31.727043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.727078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.727259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.727291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.727478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.727512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.727642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.727674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.727922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.727956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 
00:36:11.058 [2024-12-14 22:45:31.728150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.728183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.728317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.728349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.728532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.728565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.728839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.728872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.729067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.729100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 
00:36:11.058 [2024-12-14 22:45:31.729376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.729409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.729598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.729640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.729820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.729853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.729984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.730018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.730255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.730287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 
00:36:11.058 [2024-12-14 22:45:31.730461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.730494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.730758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.730790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.730943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.730978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.731157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.731189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 00:36:11.058 [2024-12-14 22:45:31.731312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.731345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.058 qpair failed and we were unable to recover it. 
00:36:11.058 [2024-12-14 22:45:31.731516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.058 [2024-12-14 22:45:31.731548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.059 qpair failed and we were unable to recover it. 00:36:11.059 [2024-12-14 22:45:31.731719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.059 [2024-12-14 22:45:31.731751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.059 qpair failed and we were unable to recover it. 00:36:11.059 [2024-12-14 22:45:31.731933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.059 [2024-12-14 22:45:31.731967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.059 qpair failed and we were unable to recover it. 00:36:11.059 [2024-12-14 22:45:31.732153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.059 [2024-12-14 22:45:31.732185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.059 qpair failed and we were unable to recover it. 00:36:11.059 [2024-12-14 22:45:31.732390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.059 [2024-12-14 22:45:31.732423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.059 qpair failed and we were unable to recover it. 
00:36:11.059 [2024-12-14 22:45:31.732678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.059 [2024-12-14 22:45:31.732712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.059 qpair failed and we were unable to recover it. 00:36:11.059 [2024-12-14 22:45:31.732890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.059 [2024-12-14 22:45:31.732932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.059 qpair failed and we were unable to recover it. 00:36:11.059 [2024-12-14 22:45:31.733117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.059 [2024-12-14 22:45:31.733150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.059 qpair failed and we were unable to recover it. 00:36:11.059 [2024-12-14 22:45:31.733389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.059 [2024-12-14 22:45:31.733422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.059 qpair failed and we were unable to recover it. 00:36:11.059 [2024-12-14 22:45:31.733543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.059 [2024-12-14 22:45:31.733576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.059 qpair failed and we were unable to recover it. 
00:36:11.059 [2024-12-14 22:45:31.733685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.059 [2024-12-14 22:45:31.733718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.059 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x7f1aa0000b90 with timestamps 2024-12-14 22:45:31.733921 through 22:45:31.752760 ...]
00:36:11.061 [2024-12-14 22:45:31.753013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.061 [2024-12-14 22:45:31.753087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.061 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x5fd6a0 with timestamps 2024-12-14 22:45:31.753209 through 22:45:31.758684 ...]
00:36:11.062 [2024-12-14 22:45:31.758873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.758925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.759111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.759144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.759415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.759449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.759564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.759597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.759863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.759897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 
00:36:11.062 [2024-12-14 22:45:31.760126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.760160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.760335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.760367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.760657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.760690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.760875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.760921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.761139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.761171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 
00:36:11.062 [2024-12-14 22:45:31.761287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.761318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.761493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.761525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.761702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.761735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.761977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.762013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.762139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.762172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 
00:36:11.062 [2024-12-14 22:45:31.762398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.762431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.762554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.762586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.762782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.762813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.762953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.762988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.763118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.763151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 
00:36:11.062 [2024-12-14 22:45:31.763332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.763363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.763480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.763513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.763779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.763813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.763993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.764028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.764149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.764182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 
00:36:11.062 [2024-12-14 22:45:31.764370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.764403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.764609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.764646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.764761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.764794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.764977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.765012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.765201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.765233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 
00:36:11.062 [2024-12-14 22:45:31.765402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.765436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.062 [2024-12-14 22:45:31.765691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.062 [2024-12-14 22:45:31.765725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.062 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.765855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.765888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.766084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.766119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.766367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.766400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 
00:36:11.063 [2024-12-14 22:45:31.766598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.766632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.766747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.766780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.766966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.767003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.767129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.767166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.767344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.767382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 
00:36:11.063 [2024-12-14 22:45:31.767569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.767606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.767826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.767862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.768088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.768121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.768381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.768414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.768591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.768626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 
00:36:11.063 [2024-12-14 22:45:31.768839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.768875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.769150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.769184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.769364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.769401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.769608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.769648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.769768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.769801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 
00:36:11.063 [2024-12-14 22:45:31.769992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.770036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.770212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.770244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.770494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.770527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.770770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.770808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.771084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.771121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 
00:36:11.063 [2024-12-14 22:45:31.771385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.771428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.771606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.771641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.771773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.771806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.772024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.772059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.772264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.772299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 
00:36:11.063 [2024-12-14 22:45:31.772482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.772518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.772698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.772735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.772990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.773028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.773214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.773250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.773381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.773414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 
00:36:11.063 [2024-12-14 22:45:31.773687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.773724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.063 [2024-12-14 22:45:31.773926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.063 [2024-12-14 22:45:31.773962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.063 qpair failed and we were unable to recover it. 00:36:11.064 [2024-12-14 22:45:31.774092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.064 [2024-12-14 22:45:31.774126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.064 qpair failed and we were unable to recover it. 00:36:11.064 [2024-12-14 22:45:31.774303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.064 [2024-12-14 22:45:31.774346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.064 qpair failed and we were unable to recover it. 00:36:11.064 [2024-12-14 22:45:31.774537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.064 [2024-12-14 22:45:31.774570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.064 qpair failed and we were unable to recover it. 
00:36:11.064 [2024-12-14 22:45:31.774748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.064 [2024-12-14 22:45:31.774782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.064 qpair failed and we were unable to recover it. 00:36:11.064 [2024-12-14 22:45:31.774943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.064 [2024-12-14 22:45:31.774978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.064 qpair failed and we were unable to recover it. 00:36:11.064 [2024-12-14 22:45:31.775098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.064 [2024-12-14 22:45:31.775132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.064 qpair failed and we were unable to recover it. 00:36:11.064 [2024-12-14 22:45:31.775350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.064 [2024-12-14 22:45:31.775384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.064 qpair failed and we were unable to recover it. 00:36:11.064 [2024-12-14 22:45:31.775583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.064 [2024-12-14 22:45:31.775616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.064 qpair failed and we were unable to recover it. 
00:36:11.064 [2024-12-14 22:45:31.775807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.064 [2024-12-14 22:45:31.775840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.064 qpair failed and we were unable to recover it.
00:36:11.066 [2024-12-14 22:45:31.792229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.066 [2024-12-14 22:45:31.792299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.066 qpair failed and we were unable to recover it.
00:36:11.067 [2024-12-14 22:45:31.800894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.800935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.801058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.801091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.801291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.801324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.801438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.801470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.801650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.801682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 
00:36:11.067 [2024-12-14 22:45:31.801946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.801980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.802094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.802126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.802237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.802268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.802380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.802413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.802518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.802550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 
00:36:11.067 [2024-12-14 22:45:31.802721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.802752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.802869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.802926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.803112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.803146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.803349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.803387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.803495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.803526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 
00:36:11.067 [2024-12-14 22:45:31.803643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.803675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.803868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.803912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.804103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.804135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.804306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.804337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.804584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.804615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 
00:36:11.067 [2024-12-14 22:45:31.804806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.804837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.804953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.804989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.805182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.805215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.805409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.805441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.805547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.805579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 
00:36:11.067 [2024-12-14 22:45:31.805700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.805733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.805914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.805947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.806128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.806160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.806423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.067 [2024-12-14 22:45:31.806455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.067 qpair failed and we were unable to recover it. 00:36:11.067 [2024-12-14 22:45:31.806647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.806678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 
00:36:11.068 [2024-12-14 22:45:31.806853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.806886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.807018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.807051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.807169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.807201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.807474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.807505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.807743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.807774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 
00:36:11.068 [2024-12-14 22:45:31.807948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.807989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.808133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.808166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.808358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.808392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.808583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.808616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.808878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.808918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 
00:36:11.068 [2024-12-14 22:45:31.809117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.809188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.809457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.809495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.809678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.809711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.809892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.809948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.810210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.810242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 
00:36:11.068 [2024-12-14 22:45:31.810357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.810391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.810634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.810667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.810842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.810874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.810997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.811030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.811229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.811262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 
00:36:11.068 [2024-12-14 22:45:31.811456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.811489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.811702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.811735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.811928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.811963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.812233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.812275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.812516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.812549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 
00:36:11.068 [2024-12-14 22:45:31.812720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.812754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.812944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.812978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.813159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.813192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.813329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.813363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.813605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.813637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 
00:36:11.068 [2024-12-14 22:45:31.813840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.813873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.814097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.814131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.814249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.814282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.814463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.814496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.814668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.814702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 
00:36:11.068 [2024-12-14 22:45:31.814875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.814918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.815093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.815126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.815274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.068 [2024-12-14 22:45:31.815308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.068 qpair failed and we were unable to recover it. 00:36:11.068 [2024-12-14 22:45:31.815433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.069 [2024-12-14 22:45:31.815466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.069 qpair failed and we were unable to recover it. 00:36:11.069 [2024-12-14 22:45:31.815656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.069 [2024-12-14 22:45:31.815689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.069 qpair failed and we were unable to recover it. 
00:36:11.069 [2024-12-14 22:45:31.815889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.069 [2024-12-14 22:45:31.815934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.069 qpair failed and we were unable to recover it. 00:36:11.069 [2024-12-14 22:45:31.816126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.069 [2024-12-14 22:45:31.816159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.069 qpair failed and we were unable to recover it. 00:36:11.069 [2024-12-14 22:45:31.816362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.069 [2024-12-14 22:45:31.816395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.069 qpair failed and we were unable to recover it. 00:36:11.069 [2024-12-14 22:45:31.816634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.069 [2024-12-14 22:45:31.816667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.069 qpair failed and we were unable to recover it. 00:36:11.069 [2024-12-14 22:45:31.816854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.069 [2024-12-14 22:45:31.816887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.069 qpair failed and we were unable to recover it. 
00:36:11.069 [2024-12-14 22:45:31.817139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.069 [2024-12-14 22:45:31.817173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.069 qpair failed and we were unable to recover it. 00:36:11.069 [2024-12-14 22:45:31.817385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.069 [2024-12-14 22:45:31.817417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.069 qpair failed and we were unable to recover it. 00:36:11.069 [2024-12-14 22:45:31.817610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.069 [2024-12-14 22:45:31.817643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.069 qpair failed and we were unable to recover it. 00:36:11.069 [2024-12-14 22:45:31.817859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.069 [2024-12-14 22:45:31.817893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.069 qpair failed and we were unable to recover it. 00:36:11.069 [2024-12-14 22:45:31.818027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.069 [2024-12-14 22:45:31.818061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.069 qpair failed and we were unable to recover it. 
00:36:11.069 [2024-12-14 22:45:31.818402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.069 [2024-12-14 22:45:31.818474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.069 qpair failed and we were unable to recover it. 00:36:11.069 [2024-12-14 22:45:31.818690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.069 [2024-12-14 22:45:31.818727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.069 qpair failed and we were unable to recover it. 00:36:11.069 [2024-12-14 22:45:31.818927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.069 [2024-12-14 22:45:31.818963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.069 qpair failed and we were unable to recover it. 00:36:11.069 [2024-12-14 22:45:31.819180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.069 [2024-12-14 22:45:31.819214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.069 qpair failed and we were unable to recover it. 00:36:11.069 [2024-12-14 22:45:31.819334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.069 [2024-12-14 22:45:31.819367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.069 qpair failed and we were unable to recover it. 
00:36:11.072 [2024-12-14 22:45:31.842710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.842743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.842862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.842895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.843082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.843115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.843299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.843332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.843592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.843625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 
00:36:11.072 [2024-12-14 22:45:31.843799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.843832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.844017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.844051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.844230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.844263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.844480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.844513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.844621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.844654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 
00:36:11.072 [2024-12-14 22:45:31.844848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.844881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.845053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.845088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.845304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.845337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.845534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.845567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.845688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.845721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 
00:36:11.072 [2024-12-14 22:45:31.845990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.846025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.846152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.846186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.846361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.846393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.846522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.846556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.846730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.846763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 
00:36:11.072 [2024-12-14 22:45:31.846954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.846989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.847109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.847143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.847358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.847391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.847629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.847662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.847769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.847802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 
00:36:11.072 [2024-12-14 22:45:31.847975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.848010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.848254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.848293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.848478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.848511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.848633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-14 22:45:31.848666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-14 22:45:31.848932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.848967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 
00:36:11.073 [2024-12-14 22:45:31.849232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.849266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.849480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.849513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.849730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.849763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.849958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.849993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.850208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.850241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 
00:36:11.073 [2024-12-14 22:45:31.850456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.850490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.850667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.850701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.850834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.850867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.851061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.851095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.851212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.851245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 
00:36:11.073 [2024-12-14 22:45:31.851388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.851421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.851610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.851643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.851915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.851949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.852131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.852164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.852335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.852367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 
00:36:11.073 [2024-12-14 22:45:31.852541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.852573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.852750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.852784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.852971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.853005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.853182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.853215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.853330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.853363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 
00:36:11.073 [2024-12-14 22:45:31.853543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.853577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.853693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.853727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.853901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.853946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.854151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.854190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.854431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.854464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 
00:36:11.073 [2024-12-14 22:45:31.854591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.854624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.854752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.854786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.854966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.855000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.855180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.855214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.855428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.855461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 
00:36:11.073 [2024-12-14 22:45:31.855725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.855759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.855949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.855983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.856162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.856195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.856370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.856403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.856579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.856612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 
00:36:11.073 [2024-12-14 22:45:31.856855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.856888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.857011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.857044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.857245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.857277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-14 22:45:31.857494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-14 22:45:31.857527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-14 22:45:31.857646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-14 22:45:31.857680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 
00:36:11.074 [2024-12-14 22:45:31.857862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-14 22:45:31.857895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-14 22:45:31.858173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-14 22:45:31.858207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-14 22:45:31.858345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-14 22:45:31.858378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-14 22:45:31.858629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-14 22:45:31.858662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-14 22:45:31.858793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-14 22:45:31.858826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 
00:36:11.074 [2024-12-14 22:45:31.858959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.074 [2024-12-14 22:45:31.858994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.074 qpair failed and we were unable to recover it.
00:36:11.074 [the same connect() failed / sock connection error / qpair failed triplet repeats for tqpair=0x5fd6a0 through 2024-12-14 22:45:31.872]
00:36:11.075 [2024-12-14 22:45:31.872212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.075 [2024-12-14 22:45:31.872284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.075 qpair failed and we were unable to recover it.
00:36:11.075 [2024-12-14 22:45:31.872545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.075 [2024-12-14 22:45:31.872616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:11.075 qpair failed and we were unable to recover it.
00:36:11.077 [the same triplet repeats for tqpair=0x7f1a94000b90 and tqpair=0x7f1aa0000b90 through 2024-12-14 22:45:31.884; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111]
00:36:11.077 [2024-12-14 22:45:31.884507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.884539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.884656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.884688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.884878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.884919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.885095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.885128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.885250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.885282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 
00:36:11.077 [2024-12-14 22:45:31.885553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.885586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.885709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.885741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.885943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.885978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.886173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.886204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.886394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.886427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 
00:36:11.077 [2024-12-14 22:45:31.886665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.886698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.886826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.886859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.888822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.888880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.889193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.889230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.889418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.889451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 
00:36:11.077 [2024-12-14 22:45:31.889713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.889747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.889957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.889992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.890186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.890219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.890412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.890445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.890648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.890680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 
00:36:11.077 [2024-12-14 22:45:31.890867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.890899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.891032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.891066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.891253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.891285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.891494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.891527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.891717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.891751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 
00:36:11.077 [2024-12-14 22:45:31.891889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.891931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.892053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.892085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.892202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.892237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.892408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.892441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.892617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.892650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 
00:36:11.077 [2024-12-14 22:45:31.892890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.892933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-14 22:45:31.893057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-14 22:45:31.893091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.893357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.893389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.893506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.893539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.893717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.893748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 
00:36:11.078 [2024-12-14 22:45:31.893929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.893964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.894163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.894196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.894377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.894410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.894595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.894634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.894757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.894790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 
00:36:11.078 [2024-12-14 22:45:31.895025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.895059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.895256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.895288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.895391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.895422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.895597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.895631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.895758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.895790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 
00:36:11.078 [2024-12-14 22:45:31.896029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.896064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.896247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.896278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.896391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.896425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.896539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.896570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.896700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.896732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 
00:36:11.078 [2024-12-14 22:45:31.896921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.896955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.897171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.897203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.897400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.897434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.897624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.897658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.897774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.897806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 
00:36:11.078 [2024-12-14 22:45:31.897993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.898028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.898212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.898244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.898504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.898536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.898712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.898745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.898874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.898927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 
00:36:11.078 [2024-12-14 22:45:31.899110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.899142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.899380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.899412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.899537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.899569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.899846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.899879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.900064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.900098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 
00:36:11.078 [2024-12-14 22:45:31.900359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.900431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.900634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.900671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.900869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.900928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.901060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.901093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-14 22:45:31.901275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.901308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 
00:36:11.078 [2024-12-14 22:45:31.901489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-14 22:45:31.901522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-14 22:45:31.901647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-14 22:45:31.901680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-14 22:45:31.901869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-14 22:45:31.901916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-14 22:45:31.902110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-14 22:45:31.902144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-14 22:45:31.902339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-14 22:45:31.902372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 
00:36:11.079 [2024-12-14 22:45:31.902504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-14 22:45:31.902537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-14 22:45:31.902715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-14 22:45:31.902749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-14 22:45:31.902928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-14 22:45:31.902963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-14 22:45:31.903210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-14 22:45:31.903243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-14 22:45:31.903387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-14 22:45:31.903421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 
00:36:11.361 [2024-12-14 22:45:31.903605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.361 [2024-12-14 22:45:31.903639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.361 qpair failed and we were unable to recover it. 00:36:11.361 [2024-12-14 22:45:31.903766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.361 [2024-12-14 22:45:31.903801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.361 qpair failed and we were unable to recover it. 00:36:11.361 [2024-12-14 22:45:31.903993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.361 [2024-12-14 22:45:31.904028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.361 qpair failed and we were unable to recover it. 00:36:11.361 [2024-12-14 22:45:31.904168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.361 [2024-12-14 22:45:31.904201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.361 qpair failed and we were unable to recover it. 00:36:11.361 [2024-12-14 22:45:31.904387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.361 [2024-12-14 22:45:31.904420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.361 qpair failed and we were unable to recover it. 
00:36:11.361 [2024-12-14 22:45:31.904573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.361 [2024-12-14 22:45:31.904606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.361 qpair failed and we were unable to recover it.
00:36:11.361 [2024-12-14 22:45:31.904723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.361 [2024-12-14 22:45:31.904756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.361 qpair failed and we were unable to recover it.
00:36:11.361 [2024-12-14 22:45:31.904873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.361 [2024-12-14 22:45:31.904918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.361 qpair failed and we were unable to recover it.
00:36:11.361 [2024-12-14 22:45:31.905094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.361 [2024-12-14 22:45:31.905127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.361 qpair failed and we were unable to recover it.
00:36:11.361 [2024-12-14 22:45:31.905294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.361 [2024-12-14 22:45:31.905328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.361 qpair failed and we were unable to recover it.
00:36:11.361 [2024-12-14 22:45:31.905502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.361 [2024-12-14 22:45:31.905535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.361 qpair failed and we were unable to recover it.
00:36:11.361 [2024-12-14 22:45:31.905648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.361 [2024-12-14 22:45:31.905680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.361 qpair failed and we were unable to recover it.
00:36:11.361 [2024-12-14 22:45:31.905808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.361 [2024-12-14 22:45:31.905847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.361 qpair failed and we were unable to recover it.
00:36:11.361 [2024-12-14 22:45:31.906095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.361 [2024-12-14 22:45:31.906129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.361 qpair failed and we were unable to recover it.
00:36:11.361 [2024-12-14 22:45:31.906307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.361 [2024-12-14 22:45:31.906339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.361 qpair failed and we were unable to recover it.
00:36:11.361 [2024-12-14 22:45:31.906531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.361 [2024-12-14 22:45:31.906565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.361 qpair failed and we were unable to recover it.
00:36:11.361 [2024-12-14 22:45:31.906752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.361 [2024-12-14 22:45:31.906785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.361 qpair failed and we were unable to recover it.
00:36:11.361 [2024-12-14 22:45:31.906899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.361 [2024-12-14 22:45:31.906943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.361 qpair failed and we were unable to recover it.
00:36:11.361 [2024-12-14 22:45:31.907151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.361 [2024-12-14 22:45:31.907185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.361 qpair failed and we were unable to recover it.
00:36:11.361 [2024-12-14 22:45:31.907309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.361 [2024-12-14 22:45:31.907342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.361 qpair failed and we were unable to recover it.
00:36:11.361 [2024-12-14 22:45:31.907448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.361 [2024-12-14 22:45:31.907481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.361 qpair failed and we were unable to recover it.
00:36:11.361 [2024-12-14 22:45:31.907667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.361 [2024-12-14 22:45:31.907700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.361 qpair failed and we were unable to recover it.
00:36:11.361 [2024-12-14 22:45:31.907880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.907930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.908107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.908141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.908407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.908441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.908686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.908720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.908871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.908916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.909167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.909199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.909312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.909345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.909519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.909552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.909691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.909723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.909951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.909986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.910158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.910192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.910307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.910341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.910523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.910557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.910734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.910766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.911009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.911044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.911236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.911269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.911444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.911478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.911586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.911620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.911812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.911845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.912026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.912061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.912196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.912228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.912412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.912445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.912632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.912665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.912847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.912880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.913013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.913047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.913156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.913189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.913362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.913396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.913574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.913608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.913735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.913767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.914028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.914064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.914267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.914300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.914532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.914605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.914751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.914789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.915007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.915044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.915165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.915199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.915321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.915355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.915529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.915565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.915670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.915704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.915880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.915927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.362 qpair failed and we were unable to recover it.
00:36:11.362 [2024-12-14 22:45:31.916057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.362 [2024-12-14 22:45:31.916091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.916213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.916247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.916382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.916415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.916593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.916628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.916749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.916783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.916895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.916938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.917194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.917227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.917424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.917458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.917641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.917674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.917808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.917841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.917954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.917989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.918130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.918163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.918344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.918376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.918589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.918623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.918815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.918849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.919053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.919087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.919277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.919310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.919553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.919589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.919705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.919742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.919939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.919975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.920154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.920188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.920363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.920396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.920584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.920618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.920745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.920777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.920962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.920997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.921187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.921220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.921425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.921458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.921653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.921686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.921816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.921849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.921993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.922027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.922142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.922176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.922384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.922418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.922677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.922715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.922841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.922874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.923020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.923059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.923186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.923219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.923410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.923443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.923625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.923657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.363 [2024-12-14 22:45:31.923774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.363 [2024-12-14 22:45:31.923807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.363 qpair failed and we were unable to recover it.
00:36:11.366 [2024-12-14 22:45:31.946651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.366 [2024-12-14 22:45:31.946684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.366 qpair failed and we were unable to recover it. 00:36:11.366 [2024-12-14 22:45:31.946813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.366 [2024-12-14 22:45:31.946847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.366 qpair failed and we were unable to recover it. 00:36:11.366 [2024-12-14 22:45:31.947034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.366 [2024-12-14 22:45:31.947069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.366 qpair failed and we were unable to recover it. 00:36:11.366 [2024-12-14 22:45:31.947206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.366 [2024-12-14 22:45:31.947238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.366 qpair failed and we were unable to recover it. 00:36:11.366 [2024-12-14 22:45:31.947351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.366 [2024-12-14 22:45:31.947385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.366 qpair failed and we were unable to recover it. 
00:36:11.366 [2024-12-14 22:45:31.947508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.366 [2024-12-14 22:45:31.947541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.366 qpair failed and we were unable to recover it. 00:36:11.366 [2024-12-14 22:45:31.947742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.366 [2024-12-14 22:45:31.947775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.366 qpair failed and we were unable to recover it. 00:36:11.366 [2024-12-14 22:45:31.947898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.366 [2024-12-14 22:45:31.947962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.366 qpair failed and we were unable to recover it. 00:36:11.366 [2024-12-14 22:45:31.948140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.366 [2024-12-14 22:45:31.948174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.366 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.948302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.948335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 
00:36:11.367 [2024-12-14 22:45:31.948459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.948492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.948737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.948769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.948894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.948940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.949129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.949162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.949335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.949367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 
00:36:11.367 [2024-12-14 22:45:31.949544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.949578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.949692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.949725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.949919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.949953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.950161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.950194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.950399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.950432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 
00:36:11.367 [2024-12-14 22:45:31.950556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.950588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.950800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.950833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.951015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.951051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.951292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.951326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.951446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.951485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 
00:36:11.367 [2024-12-14 22:45:31.951591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.951624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.951742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.951775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.951950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.951984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.952165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.952199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.952315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.952348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 
00:36:11.367 [2024-12-14 22:45:31.952473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.952505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.952712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.952745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.952861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.952894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.953022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.953055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.953185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.953217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 
00:36:11.367 [2024-12-14 22:45:31.953349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.953382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.953500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.953533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.955351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.955411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.955719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.955755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.955946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.955983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 
00:36:11.367 [2024-12-14 22:45:31.956191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.956223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.956350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.956384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.956489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.956522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.956645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.956678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.956786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.956819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 
00:36:11.367 [2024-12-14 22:45:31.957018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.957052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.957226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.957256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.957434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.957468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.367 qpair failed and we were unable to recover it. 00:36:11.367 [2024-12-14 22:45:31.957590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.367 [2024-12-14 22:45:31.957623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.957745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.957779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 
00:36:11.368 [2024-12-14 22:45:31.957953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.957987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.958263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.958300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.958481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.958512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.958637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.958667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.958784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.958815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 
00:36:11.368 [2024-12-14 22:45:31.958959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.958990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.959126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.959156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.959335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.959366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.959472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.959503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.960795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.960844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 
00:36:11.368 [2024-12-14 22:45:31.961133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.961167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.961287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.961317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.962924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.962976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.963172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.963203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.963368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.963398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 
00:36:11.368 [2024-12-14 22:45:31.963525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.963556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.963653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.963683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.963797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.963827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.963955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.963987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.964157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.964187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 
00:36:11.368 [2024-12-14 22:45:31.964290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.964320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.964514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.964547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.964742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.964775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.964915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.964946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 00:36:11.368 [2024-12-14 22:45:31.965056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.368 [2024-12-14 22:45:31.965086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.368 qpair failed and we were unable to recover it. 
00:36:11.368 [2024-12-14 22:45:31.965260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.368 [2024-12-14 22:45:31.965292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.368 qpair failed and we were unable to recover it.
00:36:11.368 [2024-12-14 22:45:31.965420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.368 [2024-12-14 22:45:31.965453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.368 qpair failed and we were unable to recover it.
00:36:11.368 [2024-12-14 22:45:31.965592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.368 [2024-12-14 22:45:31.965626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.368 qpair failed and we were unable to recover it.
00:36:11.368 [2024-12-14 22:45:31.965877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.368 [2024-12-14 22:45:31.965918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.368 qpair failed and we were unable to recover it.
00:36:11.368 [2024-12-14 22:45:31.966039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.368 [2024-12-14 22:45:31.966073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.368 qpair failed and we were unable to recover it.
00:36:11.368 [2024-12-14 22:45:31.966210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.368 [2024-12-14 22:45:31.966243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.368 qpair failed and we were unable to recover it.
00:36:11.368 [2024-12-14 22:45:31.966353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.368 [2024-12-14 22:45:31.966386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.368 qpair failed and we were unable to recover it.
00:36:11.368 [2024-12-14 22:45:31.966513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.368 [2024-12-14 22:45:31.966545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.368 qpair failed and we were unable to recover it.
00:36:11.368 [2024-12-14 22:45:31.966739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.368 [2024-12-14 22:45:31.966772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.368 qpair failed and we were unable to recover it.
00:36:11.368 [2024-12-14 22:45:31.966996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.368 [2024-12-14 22:45:31.967028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.368 qpair failed and we were unable to recover it.
00:36:11.368 [2024-12-14 22:45:31.967215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.368 [2024-12-14 22:45:31.967243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.368 qpair failed and we were unable to recover it.
00:36:11.368 [2024-12-14 22:45:31.967356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.368 [2024-12-14 22:45:31.967383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.368 qpair failed and we were unable to recover it.
00:36:11.368 [2024-12-14 22:45:31.967550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.368 [2024-12-14 22:45:31.967579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.368 qpair failed and we were unable to recover it.
00:36:11.368 [2024-12-14 22:45:31.967698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.967726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.967849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.967878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.968103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.968176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.968386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.968423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.968611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.968655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.968769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.968803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.968944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.968980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.969092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.969125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.969238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.969270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.969402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.969433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.969543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.969577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.969697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.969728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.969965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.969999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.970243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.970276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.970470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.970502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.970627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.970660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.970767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.970800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.970976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.971020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.971140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.971169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.971296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.971324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.971441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.971469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.971575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.971602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.971783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.971816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.972003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.972038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.972217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.972249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.972358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.972391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.972573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.972606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.972715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.972749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.974005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.974050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.974160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.974185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.974288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.974313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.974483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.974522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.974695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.974728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.974928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.974963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.975076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.975105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.975309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.975341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.975516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.975549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.975743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.975776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.976043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.976072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.369 [2024-12-14 22:45:31.976248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.369 [2024-12-14 22:45:31.976275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.369 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.976371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.976397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.976563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.976592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.976715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.976744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.976856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.976884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.977053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.977082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.977185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.977212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.977456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.977489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.977591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.977625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.977797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.977831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.977937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.977965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.978082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.978110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.978290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.978321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.978435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.978465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.978585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.978615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.978732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.978763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.978878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.978948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.979119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.979151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.979269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.979300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.979467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.979497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.979615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.979644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.979828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.979857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.979978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.980008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.980262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.980291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.980525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.980555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.980668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.980698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.980887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.980929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.981165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.981194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.981309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.981338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.981512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.981543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.981656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.981685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.981924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.981956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.982211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.982241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.982373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.982404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.982570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.370 [2024-12-14 22:45:31.982600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.370 qpair failed and we were unable to recover it.
00:36:11.370 [2024-12-14 22:45:31.982770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.982799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.982944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.982976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.983103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.983132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.983229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.983259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.983384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.983413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.983608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.983640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.983754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.983784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.983893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.983934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.984112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.984142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.984239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.984269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.984434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.984464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.984577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.984606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.984709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.984738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.984847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.984876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.985109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.985179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.985386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.985422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.985549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.985583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.985697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.985729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.985843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.985874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.986039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.986112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.986247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.986281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.986395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.986424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.986608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.986638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.986818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.986847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.986983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.987014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.987268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.987303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.987498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.987528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.987641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.371 [2024-12-14 22:45:31.987670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.371 qpair failed and we were unable to recover it.
00:36:11.371 [2024-12-14 22:45:31.987838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.371 [2024-12-14 22:45:31.987867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.371 qpair failed and we were unable to recover it. 00:36:11.371 [2024-12-14 22:45:31.988104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.371 [2024-12-14 22:45:31.988137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.371 qpair failed and we were unable to recover it. 00:36:11.371 [2024-12-14 22:45:31.988325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.371 [2024-12-14 22:45:31.988358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.371 qpair failed and we were unable to recover it. 00:36:11.371 [2024-12-14 22:45:31.988479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.371 [2024-12-14 22:45:31.988512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.371 qpair failed and we were unable to recover it. 00:36:11.371 [2024-12-14 22:45:31.988713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.371 [2024-12-14 22:45:31.988744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.371 qpair failed and we were unable to recover it. 
00:36:11.371 [2024-12-14 22:45:31.988857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.371 [2024-12-14 22:45:31.988889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.371 qpair failed and we were unable to recover it. 00:36:11.371 [2024-12-14 22:45:31.989081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.371 [2024-12-14 22:45:31.989115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.371 qpair failed and we were unable to recover it. 00:36:11.371 [2024-12-14 22:45:31.989375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.371 [2024-12-14 22:45:31.989407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.371 qpair failed and we were unable to recover it. 00:36:11.371 [2024-12-14 22:45:31.989577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.371 [2024-12-14 22:45:31.989609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.371 qpair failed and we were unable to recover it. 00:36:11.371 [2024-12-14 22:45:31.989816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.371 [2024-12-14 22:45:31.989849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.371 qpair failed and we were unable to recover it. 
00:36:11.371 [2024-12-14 22:45:31.990032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.371 [2024-12-14 22:45:31.990065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.371 qpair failed and we were unable to recover it. 00:36:11.371 [2024-12-14 22:45:31.990335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.371 [2024-12-14 22:45:31.990367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.371 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.990557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.990588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.990705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.990737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.990867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.990900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 
00:36:11.372 [2024-12-14 22:45:31.991043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.991077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.991255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.991289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.991495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.991528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.991770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.991802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.991938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.991973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 
00:36:11.372 [2024-12-14 22:45:31.992098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.992131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.992369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.992402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.992593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.992626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.992738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.992771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.992894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.992942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 
00:36:11.372 [2024-12-14 22:45:31.993157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.993190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.993303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.993336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.993454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.993486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.993598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.993630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.993742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.993775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 
00:36:11.372 [2024-12-14 22:45:31.993882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.993923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.994035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.994068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.994243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.994277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.994395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.994427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.994618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.994651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 
00:36:11.372 [2024-12-14 22:45:31.994834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.994866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.995078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.995112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.995289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.995322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.995567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.995602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.995729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.995761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 
00:36:11.372 [2024-12-14 22:45:31.995875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.995932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.996050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.996083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.996283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.996315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.996420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.996451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.996652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.996685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 
00:36:11.372 [2024-12-14 22:45:31.996807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.996840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.996947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.996981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.997103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.997135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.997315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.997347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.997523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.997555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 
00:36:11.372 [2024-12-14 22:45:31.997665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.997697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.372 qpair failed and we were unable to recover it. 00:36:11.372 [2024-12-14 22:45:31.997818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.372 [2024-12-14 22:45:31.997850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:31.997988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:31.998021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:31.998151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:31.998184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:31.998301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:31.998334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 
00:36:11.373 [2024-12-14 22:45:31.998600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:31.998632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:31.998839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:31.998873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:31.999044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:31.999079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:31.999206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:31.999238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:31.999428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:31.999461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 
00:36:11.373 [2024-12-14 22:45:31.999648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:31.999681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:31.999865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:31.999897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.000103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.000136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.000246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.000278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.000475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.000507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 
00:36:11.373 [2024-12-14 22:45:32.000699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.000738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.000926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.000961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.001135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.001168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.001373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.001406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.001535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.001567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 
00:36:11.373 [2024-12-14 22:45:32.001751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.001784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.001961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.001996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.002113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.002145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.002338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.002370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.002542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.002575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 
00:36:11.373 [2024-12-14 22:45:32.002696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.002729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.002967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.003003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.003195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.003228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.003413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.003446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.003622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.003656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 
00:36:11.373 [2024-12-14 22:45:32.003839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.003871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.004077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.004112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.004237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.004270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.004388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.004420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.004615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.004647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 
00:36:11.373 [2024-12-14 22:45:32.004762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.004794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.005013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.005047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.005169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.005202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.005313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.005345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.005471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.005503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 
00:36:11.373 [2024-12-14 22:45:32.005620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.373 [2024-12-14 22:45:32.005652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.373 qpair failed and we were unable to recover it. 00:36:11.373 [2024-12-14 22:45:32.005823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.005855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.005975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.006015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.006146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.006178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.006283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.006316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 
00:36:11.374 [2024-12-14 22:45:32.006428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.006461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.006644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.006676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.006846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.006879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.006998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.007031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.007210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.007243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 
00:36:11.374 [2024-12-14 22:45:32.007351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.007383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.007553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.007585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.007781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.007813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.007954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.007989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.008112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.008146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 
00:36:11.374 [2024-12-14 22:45:32.008264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.008296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.008520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.008591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.010054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.010111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.010268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.010301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.010492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.010526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 
00:36:11.374 [2024-12-14 22:45:32.010656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.010689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.010813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.010845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.011036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.011071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.011190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.011223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.011360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.011392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 
00:36:11.374 [2024-12-14 22:45:32.011563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.011596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.011783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.011815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.011993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.012028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.012150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.012183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.012321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.012361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 
00:36:11.374 [2024-12-14 22:45:32.012489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.012522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.012707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.012740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.012850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.012882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.013063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.013096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.374 [2024-12-14 22:45:32.013208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.013241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 
00:36:11.374 [2024-12-14 22:45:32.013377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.374 [2024-12-14 22:45:32.013409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.374 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.013610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.013644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.013763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.013795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.013915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.013949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.014065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.014098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 
00:36:11.375 [2024-12-14 22:45:32.015877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.015949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.016166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.016201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.016323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.016356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.016589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.016623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.016762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.016795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 
00:36:11.375 [2024-12-14 22:45:32.017007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.017041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.017169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.017201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.017325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.017358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.017605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.017637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.017822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.017857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 
00:36:11.375 [2024-12-14 22:45:32.017976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.018010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.018181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.018213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.019568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.019620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.019882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.019929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.020176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.020210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 
00:36:11.375 [2024-12-14 22:45:32.020324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.020357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.020547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.020580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.020718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.020751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.020865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.020897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.021030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.021062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 
00:36:11.375 [2024-12-14 22:45:32.021269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.021302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.021418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.021450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.021574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.021606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.021800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.021833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.021961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.021994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 
00:36:11.375 [2024-12-14 22:45:32.022113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.022146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.022269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.022302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.022478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.022510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.022642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.022675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.022852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.022890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 
00:36:11.375 [2024-12-14 22:45:32.023015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.023048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.023160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.023193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.023297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.023330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.023521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.023554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 00:36:11.375 [2024-12-14 22:45:32.023823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.375 [2024-12-14 22:45:32.023855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.375 qpair failed and we were unable to recover it. 
00:36:11.375 [2024-12-14 22:45:32.023991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.376 [2024-12-14 22:45:32.024025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.376 qpair failed and we were unable to recover it. 00:36:11.376 [2024-12-14 22:45:32.024233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.376 [2024-12-14 22:45:32.024264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.376 qpair failed and we were unable to recover it. 00:36:11.376 [2024-12-14 22:45:32.024447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.376 [2024-12-14 22:45:32.024480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.376 qpair failed and we were unable to recover it. 00:36:11.376 [2024-12-14 22:45:32.024584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.376 [2024-12-14 22:45:32.024617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.376 qpair failed and we were unable to recover it. 00:36:11.376 [2024-12-14 22:45:32.024737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.376 [2024-12-14 22:45:32.024769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.376 qpair failed and we were unable to recover it. 
00:36:11.376 [2024-12-14 22:45:32.024952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.024986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.025163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.025195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.025306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.025339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.025456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.025490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.025604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.025634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.025809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.025840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.026020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.026054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.026160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.026191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.026366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.026399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.026578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.026611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.026787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.026819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.026945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.026981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.027092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.027125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.027249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.027282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.027389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.027422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.027547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.027579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.027734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.027805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.028000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.028070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.028198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.028238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.028416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.028449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.028639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.028672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.028791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.028824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.028944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.028978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.029105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.029141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.029350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.029383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.029502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.029535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.029707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.029741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.029884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.029929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.030065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.030097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.030281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.030314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.030455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.030491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.030625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.030658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.030768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.030800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.376 qpair failed and we were unable to recover it.
00:36:11.376 [2024-12-14 22:45:32.030976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.376 [2024-12-14 22:45:32.031011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.031146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.031180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.031358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.031391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.031578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.031612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.031788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.031821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.031938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.031972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.032148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.032181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.032391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.032424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.032605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.032638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.032757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.032793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.032989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.033031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.033217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.033249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.033493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.033526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.033639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.033672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.033779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.033811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.034081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.034115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.034221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.034253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.034362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.034395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.034515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.034548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.034756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.034789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.034974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.035008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.035193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.035225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.035349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.035382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.035526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.035560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.035744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.035777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.035901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.035947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.036123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.036157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.036325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.036359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.036537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.036569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.036694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.036726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.036863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.036896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.037017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.037050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.037302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.037334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.037441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.037475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.037588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.037621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.037745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.037777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.037924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.037958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.038133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.038167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.038281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.038315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.038436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.377 [2024-12-14 22:45:32.038474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.377 qpair failed and we were unable to recover it.
00:36:11.377 [2024-12-14 22:45:32.038595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.038629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.038744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.038777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.038899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.038946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.039050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.039084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.039290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.039322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.039443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.039479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.039658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.039691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.039794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.039827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.039961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.039996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.040103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.040136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.040351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.040383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.040510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.040550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.040658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.040692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.040797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.040829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.040967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.041001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.041172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.041205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.041319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.041355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.041481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.041514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.041628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.041660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.041839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.041871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.042046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.042119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.042321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.042366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.042495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.042529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.042644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.042677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.042795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.042829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.042954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.042990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.043112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.043144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.043260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.043293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.043405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.043438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.043550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.043583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.043688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.043720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.043827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.043860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.043980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.044014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.044125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.044158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.044262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.044295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.044397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.044430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.044617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.044650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.044779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.044812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.044990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.045028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.378 [2024-12-14 22:45:32.045203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.378 [2024-12-14 22:45:32.045234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.378 qpair failed and we were unable to recover it.
00:36:11.379 [2024-12-14 22:45:32.045426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.379 [2024-12-14 22:45:32.045458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.379 qpair failed and we were unable to recover it.
00:36:11.379 [2024-12-14 22:45:32.045647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.379 [2024-12-14 22:45:32.045679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.379 qpair failed and we were unable to recover it.
00:36:11.379 [2024-12-14 22:45:32.045796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.045828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.045951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.045986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.046091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.046124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.046315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.046348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.046458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.046491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 
00:36:11.379 [2024-12-14 22:45:32.046619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.046652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.046841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.046874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.047064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.047097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.047223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.047256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.047370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.047402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 
00:36:11.379 [2024-12-14 22:45:32.047526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.047559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.047728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.047761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.047940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.047975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.048107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.048139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.048244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.048277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 
00:36:11.379 [2024-12-14 22:45:32.048402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.048435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.048609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.048642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.048754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.048787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.048893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.048935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.049177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.049211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 
00:36:11.379 [2024-12-14 22:45:32.049324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.049356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.049566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.049599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.049714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.049746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.049867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.049938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.050048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.050082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 
00:36:11.379 [2024-12-14 22:45:32.050198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.050230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.050339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.050372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.050492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.050524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.050651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.050684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.050812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.050845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 
00:36:11.379 [2024-12-14 22:45:32.050961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.050995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.051110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.379 [2024-12-14 22:45:32.051144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.379 qpair failed and we were unable to recover it. 00:36:11.379 [2024-12-14 22:45:32.051317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.051350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.051527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.051559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.051743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.051777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 
00:36:11.380 [2024-12-14 22:45:32.051891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.051936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.052061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.052094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.052219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.052252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.052357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.052390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.052587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.052619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 
00:36:11.380 [2024-12-14 22:45:32.052809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.052849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.053051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.053086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.053205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.053238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.053352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.053384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.053559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.053592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 
00:36:11.380 [2024-12-14 22:45:32.053722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.053754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.053857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.053890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.054077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.054109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.054213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.054245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.054350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.054382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 
00:36:11.380 [2024-12-14 22:45:32.054522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.054560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.054669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.054701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.054814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.054847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.054965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.054999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.055111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.055144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 
00:36:11.380 [2024-12-14 22:45:32.055268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.055301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.055413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.055445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.055628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.055661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.055834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.055867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.056008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.056041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 
00:36:11.380 [2024-12-14 22:45:32.056243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.056275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.056453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.056486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.056657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.056689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.056821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.056854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.056995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.057030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 
00:36:11.380 [2024-12-14 22:45:32.057150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.057182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.057297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.057330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.057440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.057473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.057576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.057607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.057728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.057760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 
00:36:11.380 [2024-12-14 22:45:32.057866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.057898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.058105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.380 [2024-12-14 22:45:32.058137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.380 qpair failed and we were unable to recover it. 00:36:11.380 [2024-12-14 22:45:32.058252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.381 [2024-12-14 22:45:32.058285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.381 qpair failed and we were unable to recover it. 00:36:11.381 [2024-12-14 22:45:32.058393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.381 [2024-12-14 22:45:32.058427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.381 qpair failed and we were unable to recover it. 00:36:11.381 [2024-12-14 22:45:32.058560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.381 [2024-12-14 22:45:32.058590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.381 qpair failed and we were unable to recover it. 
00:36:11.381 [2024-12-14 22:45:32.058691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.381 [2024-12-14 22:45:32.058721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.381 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1054 connect() failed, errno = 111 / nvme_tcp.c:2288 sock connection error / "qpair failed and we were unable to recover it.") repeats continuously from 22:45:32.058691 through 22:45:32.081007, alternating between tqpair=0x5fd6a0, tqpair=0x7f1aa0000b90, and tqpair=0x7f1a94000b90, always with addr=10.0.0.2, port=4420 ...]
00:36:11.384 [2024-12-14 22:45:32.081146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.081179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.081384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.081416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.081530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.081563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.081679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.081713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.081898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.081944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 
00:36:11.384 [2024-12-14 22:45:32.082081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.082114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.082232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.082265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.082513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.082547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.082737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.082769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.082941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.083013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 
00:36:11.384 [2024-12-14 22:45:32.083154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.083191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.083320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.083352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.083529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.083562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.083676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.083710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.083827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.083858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 
00:36:11.384 [2024-12-14 22:45:32.084008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.084042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.084164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.084197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.084319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.084351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.084531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.084564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.084795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.084829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 
00:36:11.384 [2024-12-14 22:45:32.085020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.085056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.085171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.085203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.085377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.085426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.085554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.085588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.085793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.085826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 
00:36:11.384 [2024-12-14 22:45:32.085942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.085976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.086151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.086183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.086357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.086390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.086507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.086539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.086656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.086688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 
00:36:11.384 [2024-12-14 22:45:32.086872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.086915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.087047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.087081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.087278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.087311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.087422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.087454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 00:36:11.384 [2024-12-14 22:45:32.087639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.384 [2024-12-14 22:45:32.087671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.384 qpair failed and we were unable to recover it. 
00:36:11.384 [2024-12-14 22:45:32.087853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.087886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.088089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.088123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.088250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.088282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.088397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.088430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.088553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.088584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 
00:36:11.385 [2024-12-14 22:45:32.088706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.088737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.088919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.088953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.089082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.089115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.089241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.089273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.089445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.089478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 
00:36:11.385 [2024-12-14 22:45:32.089738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.089771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.089971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.090007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.090113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.090145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.090389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.090421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.090571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.090624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 
00:36:11.385 [2024-12-14 22:45:32.090820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.090854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.091054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.091088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.091215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.091248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.091378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.091410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.091534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.091566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 
00:36:11.385 [2024-12-14 22:45:32.091699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.091732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.092000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.092035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.092218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.092250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.092432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.092464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.092640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.092672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 
00:36:11.385 [2024-12-14 22:45:32.092802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.092836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.092950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.092985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.093163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.093195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.093394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.093427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.093560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.093594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 
00:36:11.385 [2024-12-14 22:45:32.093836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.093869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.094073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.094109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.094292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.094325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.094445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.094478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.094648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.094680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 
00:36:11.385 [2024-12-14 22:45:32.094789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.094822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.094952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.094989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.095173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.095206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.095400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.095433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.385 qpair failed and we were unable to recover it. 00:36:11.385 [2024-12-14 22:45:32.095606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.385 [2024-12-14 22:45:32.095638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.386 qpair failed and we were unable to recover it. 
00:36:11.386 [2024-12-14 22:45:32.095766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.095798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.096032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.096073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.096263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.096296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.096472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.096504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.096684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.096717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.096823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.096855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.096973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.097007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.097183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.097215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.097410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.097442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.097719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.097753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.097875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.097914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.098119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.098152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.098279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.098311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.098568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.098601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.098712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.098744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.098866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.098899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.099037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.099071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.099337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.099370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.099485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.099518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.099687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.099722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.099849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.099882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.100068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.100101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.100298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.100332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.100447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.100480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.100613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.100645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.100759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.100791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.100897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.100941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.101116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.101149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.101327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.101359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.101481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.101514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.101714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.101748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.101861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.101894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.102100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.102132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.102258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.102291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.102390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.102423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.102537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.102571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.102681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.102715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.102817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.102850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.103085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.386 [2024-12-14 22:45:32.103121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.386 qpair failed and we were unable to recover it.
00:36:11.386 [2024-12-14 22:45:32.103317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.103351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.103551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.103584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.103709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.103741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.105097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.105156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.105468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.105502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.105697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.105731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.105924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.105958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.106092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.106124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.106245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.106277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.106405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.106437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.106675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.106708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.106890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.106968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.107217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.107250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.107353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.107385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.107527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.107559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.107730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.107762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.107940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.107975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.108166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.108199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.108438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.108472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.108735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.108767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.109011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.109043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.109272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.109302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.109408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.109438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.109619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.109649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.109831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.109861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.109984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.110016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.110144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.110174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.110354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.110384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.110495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.110525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.110758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.110790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.110956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.110998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.111196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.111229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.111356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.111385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.111572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.111602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.111768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.111798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.111968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.111999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.112110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.112140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.112275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.112308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.112571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.112604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.112809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.387 [2024-12-14 22:45:32.112842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.387 qpair failed and we were unable to recover it.
00:36:11.387 [2024-12-14 22:45:32.113052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.113086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.114766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.114820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.115044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.115077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.115349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.115383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.115583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.115616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.115801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.115834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.116026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.116061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.116198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.116230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.116470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.116502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.116743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.116776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.116899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.116941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.118636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.118687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.118956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.118988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.119223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.119251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.119505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.119533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.119651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.119679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.119803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.119831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.120005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.120035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.120214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.120242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.120426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.120459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.120576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.120609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.120746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.120779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.121005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.121040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.121153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.121181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.121432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.121465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.121730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.121763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.121954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.121987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.122122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.122166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.122292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.122320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.122484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.122516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.122636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.122669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.122857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.122894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.123144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.123180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.123309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.123343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.388 [2024-12-14 22:45:32.123474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.388 [2024-12-14 22:45:32.123507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.388 qpair failed and we were unable to recover it.
00:36:11.389 [2024-12-14 22:45:32.123692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.389 [2024-12-14 22:45:32.123726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.389 qpair failed and we were unable to recover it.
00:36:11.389 [2024-12-14 22:45:32.123916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.123952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.124134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.124166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.124293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.124327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.124462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.124495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.124733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.124766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 
00:36:11.389 [2024-12-14 22:45:32.124884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.124926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.125109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.125143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.125385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.125417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.125603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.125636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.125778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.125812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 
00:36:11.389 [2024-12-14 22:45:32.125990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.126025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.126205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.126238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.126349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.126381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.126506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.126533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.126644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.126673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 
00:36:11.389 [2024-12-14 22:45:32.126974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.127004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.127116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.127143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.127376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.127409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.127518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.127551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.127734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.127766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 
00:36:11.389 [2024-12-14 22:45:32.127954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.127990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.128109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.128142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.128333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.128371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.128546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.128578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.128709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.128742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 
00:36:11.389 [2024-12-14 22:45:32.128927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.128962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.129166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.129199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.129381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.129413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.129656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.129684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.129865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.129898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 
00:36:11.389 [2024-12-14 22:45:32.130087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.130120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.130294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.130328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.130511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.130543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.130671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.130704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.130875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.130964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 
00:36:11.389 [2024-12-14 22:45:32.131103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.131137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.131268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.131301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.131513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-12-14 22:45:32.131546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.389 qpair failed and we were unable to recover it. 00:36:11.389 [2024-12-14 22:45:32.131683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.131716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.131991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.132025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 
00:36:11.390 [2024-12-14 22:45:32.132200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.132233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.132415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.132448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.132572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.132604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.132778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.132813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.132941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.132975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 
00:36:11.390 [2024-12-14 22:45:32.133081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.133114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.133297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.133331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.133572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.133604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.133716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.133749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.133867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.133899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 
00:36:11.390 [2024-12-14 22:45:32.134106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.134139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.134320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.134353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.134474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.134506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.134706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.134739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.134926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.134961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 
00:36:11.390 [2024-12-14 22:45:32.135152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.135184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.135430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.135464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.135657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.135689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.135801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.135835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.135958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.135991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 
00:36:11.390 [2024-12-14 22:45:32.136107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.136140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.136255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.136288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.136416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.136449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.136601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.136639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.136827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.136860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 
00:36:11.390 [2024-12-14 22:45:32.137073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.137107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.137349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.137381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.137588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.137621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.137849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.137881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.138012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.138045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 
00:36:11.390 [2024-12-14 22:45:32.138159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.138193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.138376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.138410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.138619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.138652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.138834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.138867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 00:36:11.390 [2024-12-14 22:45:32.139017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.390 [2024-12-14 22:45:32.139052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.390 qpair failed and we were unable to recover it. 
00:36:11.390 [2024-12-14 22:45:32.139290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.390 [2024-12-14 22:45:32.139323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.390 qpair failed and we were unable to recover it.
00:36:11.393 [2024-12-14 22:45:32.158546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.393 [2024-12-14 22:45:32.158613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.393 qpair failed and we were unable to recover it.
00:36:11.393 [2024-12-14 22:45:32.160716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.393 [2024-12-14 22:45:32.160748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.393 qpair failed and we were unable to recover it. 00:36:11.393 [2024-12-14 22:45:32.160937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.393 [2024-12-14 22:45:32.160973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.393 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.161147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.161180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.161286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.161319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.161440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.161473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 
00:36:11.394 [2024-12-14 22:45:32.161729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.161762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.161955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.161989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.162096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.162128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.162256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.162288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.162467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.162501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 
00:36:11.394 [2024-12-14 22:45:32.162628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.162661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.162787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.162820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.162996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.163030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.163154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.163186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.163299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.163331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 
00:36:11.394 [2024-12-14 22:45:32.163534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.163567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.163737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.163769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.163901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.163945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.164210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.164243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.164348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.164381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 
00:36:11.394 [2024-12-14 22:45:32.164565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.164597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.164765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.164798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.164920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.164960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.165140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.165172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.165359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.165391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 
00:36:11.394 [2024-12-14 22:45:32.165512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.165545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.165737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.165769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.166018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.166053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.166187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.166220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.166327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.166359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 
00:36:11.394 [2024-12-14 22:45:32.166534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.166567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.166702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.166735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.166860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.166892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.167077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.167111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.167307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.167340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 
00:36:11.394 [2024-12-14 22:45:32.167459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.167491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.167599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.167633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.167754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.167786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.167900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.167943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-12-14 22:45:32.168061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.168093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 
00:36:11.394 [2024-12-14 22:45:32.168267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.394 [2024-12-14 22:45:32.168298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.168418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.168451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.169841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.169894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.170050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.170086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.170279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.170313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 
00:36:11.395 [2024-12-14 22:45:32.171635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.171686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.171981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.172019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.172207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.172240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.172437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.172471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.172663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.172697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 
00:36:11.395 [2024-12-14 22:45:32.172881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.172924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.173073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.173106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.173295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.173327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.173453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.173486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.173770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.173804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 
00:36:11.395 [2024-12-14 22:45:32.174044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.174078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.174276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.174309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.174522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.174556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.174738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.174771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.174970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.175005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 
00:36:11.395 [2024-12-14 22:45:32.175273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.175305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.175426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.175459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.175697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.175736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.175843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.175875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.176075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.176108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 
00:36:11.395 [2024-12-14 22:45:32.176290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.176322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.176455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.176488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.176626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.176658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.176852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.176885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.177080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.177113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 
00:36:11.395 [2024-12-14 22:45:32.177240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.177273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.177517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.177550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.177744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.177777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.177890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.177934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-12-14 22:45:32.178195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.178227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 
00:36:11.395 [2024-12-14 22:45:32.178345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.395 [2024-12-14 22:45:32.178377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.395 qpair failed and we were unable to recover it. 
[... the preceding connect()/qpair error pair repeated ~115 times between 22:45:32.178 and 22:45:32.203 with varying timestamps only; every attempt to tqpair=0x7f1a98000b90 (10.0.0.2:4420) failed with errno = 111 (ECONNREFUSED) ...]
00:36:11.398 [2024-12-14 22:45:32.203302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.398 [2024-12-14 22:45:32.203335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.398 qpair failed and we were unable to recover it. 00:36:11.398 [2024-12-14 22:45:32.203516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.398 [2024-12-14 22:45:32.203547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.398 qpair failed and we were unable to recover it. 00:36:11.398 [2024-12-14 22:45:32.203674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.398 [2024-12-14 22:45:32.203706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.203812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.203844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.204052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.204088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 
00:36:11.399 [2024-12-14 22:45:32.204210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.204243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.204369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.204402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.204577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.204611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.204736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.204782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.204967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.204999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 
00:36:11.399 [2024-12-14 22:45:32.205182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.205212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.205396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.205429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.205561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.205595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.205719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.205752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.205876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.205917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 
00:36:11.399 [2024-12-14 22:45:32.206040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.206073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.206314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.206347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.206468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.206500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.206607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.206640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.206831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.206864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 
00:36:11.399 [2024-12-14 22:45:32.207082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.207117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.207237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.207270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.207386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.207419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.207527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.207561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.207690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.207723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 
00:36:11.399 [2024-12-14 22:45:32.207833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.207878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.208070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.208103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.208203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.208233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.208418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.208449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.208638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.208669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 
00:36:11.399 [2024-12-14 22:45:32.208925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.208956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.209138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.209167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.209405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.209444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.209556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.209587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.209773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.209805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 
00:36:11.399 [2024-12-14 22:45:32.209924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.209956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.210084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.210113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.210289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.210320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.210498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.210532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.210713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.210746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 
00:36:11.399 [2024-12-14 22:45:32.210872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.210913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.399 qpair failed and we were unable to recover it. 00:36:11.399 [2024-12-14 22:45:32.211049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-12-14 22:45:32.211083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.211264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.211296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.211428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.211461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.211714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.211746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 
00:36:11.400 [2024-12-14 22:45:32.211986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.212022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.212214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.212247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.212427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.212461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.212637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.212670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.212838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.212866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 
00:36:11.400 [2024-12-14 22:45:32.212991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.213020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.213134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.213162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.213331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.213359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.213589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.213623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.213749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.213781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 
00:36:11.400 [2024-12-14 22:45:32.213891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.213936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.214140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.214174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.214421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.214454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.214659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.214693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.214873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.214917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 
00:36:11.400 [2024-12-14 22:45:32.215094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.215127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.215246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.215279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.215407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.215441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.215571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.215603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.215715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.215749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 
00:36:11.400 [2024-12-14 22:45:32.215869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.215897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.216155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.216188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.216374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.216407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.216533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.216567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.216751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.216779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 
00:36:11.400 [2024-12-14 22:45:32.216917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.216951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.217075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.217107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.217222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.217260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.217431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.217464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 00:36:11.400 [2024-12-14 22:45:32.217642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.217675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it. 
00:36:11.400 [2024-12-14 22:45:32.217933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.400 [2024-12-14 22:45:32.217968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.400 qpair failed and we were unable to recover it.
[above error triplet (connect() failed, errno = 111; sock connection error; qpair failed and we were unable to recover it.) repeated from 22:45:32.218108 through 22:45:32.243248, log timestamps 00:36:11.400-00:36:11.688, for tqpair=0x7f1a98000b90 and tqpair=0x7f1a94000b90, all with addr=10.0.0.2, port=4420]
00:36:11.688 [2024-12-14 22:45:32.243433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.688 [2024-12-14 22:45:32.243464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.688 qpair failed and we were unable to recover it. 00:36:11.688 [2024-12-14 22:45:32.243573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.688 [2024-12-14 22:45:32.243606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.688 qpair failed and we were unable to recover it. 00:36:11.688 [2024-12-14 22:45:32.243724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.688 [2024-12-14 22:45:32.243760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.688 qpair failed and we were unable to recover it. 00:36:11.688 [2024-12-14 22:45:32.243942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.688 [2024-12-14 22:45:32.243975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.688 qpair failed and we were unable to recover it. 00:36:11.688 [2024-12-14 22:45:32.244088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.688 [2024-12-14 22:45:32.244119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.688 qpair failed and we were unable to recover it. 
00:36:11.688 [2024-12-14 22:45:32.244324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.688 [2024-12-14 22:45:32.244358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.688 qpair failed and we were unable to recover it. 00:36:11.688 [2024-12-14 22:45:32.244479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.688 [2024-12-14 22:45:32.244511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.688 qpair failed and we were unable to recover it. 00:36:11.688 [2024-12-14 22:45:32.244648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.688 [2024-12-14 22:45:32.244681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.688 qpair failed and we were unable to recover it. 00:36:11.688 [2024-12-14 22:45:32.244810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.688 [2024-12-14 22:45:32.244843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.688 qpair failed and we were unable to recover it. 00:36:11.688 [2024-12-14 22:45:32.245066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.688 [2024-12-14 22:45:32.245100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.688 qpair failed and we were unable to recover it. 
00:36:11.688 [2024-12-14 22:45:32.245274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.688 [2024-12-14 22:45:32.245309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.688 qpair failed and we were unable to recover it. 00:36:11.688 [2024-12-14 22:45:32.245422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.688 [2024-12-14 22:45:32.245455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.688 qpair failed and we were unable to recover it. 00:36:11.688 [2024-12-14 22:45:32.245628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.688 [2024-12-14 22:45:32.245660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.688 qpair failed and we were unable to recover it. 00:36:11.688 [2024-12-14 22:45:32.245853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.688 [2024-12-14 22:45:32.245886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.688 qpair failed and we were unable to recover it. 00:36:11.688 [2024-12-14 22:45:32.246091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.688 [2024-12-14 22:45:32.246123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.688 qpair failed and we were unable to recover it. 
00:36:11.688 [2024-12-14 22:45:32.246247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.688 [2024-12-14 22:45:32.246280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.688 qpair failed and we were unable to recover it. 00:36:11.688 [2024-12-14 22:45:32.246465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.246499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.246759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.246793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.246923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.246957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.247066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.247099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 
00:36:11.689 [2024-12-14 22:45:32.247231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.247264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.247372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.247405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.247587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.247620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.247734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.247766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.247954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.247989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 
00:36:11.689 [2024-12-14 22:45:32.248096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.248129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.248312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.248345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.248454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.248487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.248676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.248707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.248835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.248873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 
00:36:11.689 [2024-12-14 22:45:32.249010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.249044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.249167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.249199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.249388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.249421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.249605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.249638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.249761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.249796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 
00:36:11.689 [2024-12-14 22:45:32.249972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.250006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.250184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.250217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.250414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.250447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.250616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.250648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.250769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.250803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 
00:36:11.689 [2024-12-14 22:45:32.250926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.250960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.251077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.251110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.251249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.251282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.251412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.251444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.251552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.251585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 
00:36:11.689 [2024-12-14 22:45:32.251709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.251742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.251925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.251959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.252079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.252112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.252228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.252262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.252443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.252476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 
00:36:11.689 [2024-12-14 22:45:32.252591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.252623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.252808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.252841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.253027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.253060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.253245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.253278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 00:36:11.689 [2024-12-14 22:45:32.253455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.689 [2024-12-14 22:45:32.253488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.689 qpair failed and we were unable to recover it. 
00:36:11.689 [2024-12-14 22:45:32.253593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.253626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 00:36:11.690 [2024-12-14 22:45:32.253816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.253849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 00:36:11.690 [2024-12-14 22:45:32.254071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.254104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 00:36:11.690 [2024-12-14 22:45:32.254214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.254246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 00:36:11.690 [2024-12-14 22:45:32.254419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.254451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 
00:36:11.690 [2024-12-14 22:45:32.254563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.254596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 00:36:11.690 [2024-12-14 22:45:32.254716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.254748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 00:36:11.690 [2024-12-14 22:45:32.254923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.254957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 00:36:11.690 [2024-12-14 22:45:32.255070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.255103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 00:36:11.690 [2024-12-14 22:45:32.255333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.255364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 
00:36:11.690 [2024-12-14 22:45:32.255468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.255501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 00:36:11.690 [2024-12-14 22:45:32.255674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.255707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 00:36:11.690 [2024-12-14 22:45:32.255974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.256011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 00:36:11.690 [2024-12-14 22:45:32.256194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.256230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 00:36:11.690 [2024-12-14 22:45:32.256418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.256457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 
00:36:11.690 [2024-12-14 22:45:32.256565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.256598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 00:36:11.690 [2024-12-14 22:45:32.256774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.256807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 00:36:11.690 [2024-12-14 22:45:32.256994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.257027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 00:36:11.690 [2024-12-14 22:45:32.257209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.257242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 00:36:11.690 [2024-12-14 22:45:32.257352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.690 [2024-12-14 22:45:32.257385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.690 qpair failed and we were unable to recover it. 
00:36:11.690 [2024-12-14 22:45:32.257508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.690 [2024-12-14 22:45:32.257541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:11.690 qpair failed and we were unable to recover it.
00:36:11.693 [... same posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeated for every retry from 22:45:32.257640 through 22:45:32.278750, all for tqpair=0x7f1a98000b90 at addr=10.0.0.2, port=4420 ...]
00:36:11.693 [2024-12-14 22:45:32.278952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.693 [2024-12-14 22:45:32.278987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.693 qpair failed and we were unable to recover it. 00:36:11.693 [2024-12-14 22:45:32.279172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.693 [2024-12-14 22:45:32.279206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.693 qpair failed and we were unable to recover it. 00:36:11.693 [2024-12-14 22:45:32.279324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.693 [2024-12-14 22:45:32.279356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.693 qpair failed and we were unable to recover it. 00:36:11.693 [2024-12-14 22:45:32.279460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.693 [2024-12-14 22:45:32.279492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.693 qpair failed and we were unable to recover it. 00:36:11.693 [2024-12-14 22:45:32.279619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.693 [2024-12-14 22:45:32.279652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.693 qpair failed and we were unable to recover it. 
00:36:11.693 [2024-12-14 22:45:32.279823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.693 [2024-12-14 22:45:32.279856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.693 qpair failed and we were unable to recover it. 00:36:11.693 [2024-12-14 22:45:32.279974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.693 [2024-12-14 22:45:32.280008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.693 qpair failed and we were unable to recover it. 00:36:11.693 [2024-12-14 22:45:32.280197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.693 [2024-12-14 22:45:32.280232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.693 qpair failed and we were unable to recover it. 00:36:11.693 [2024-12-14 22:45:32.280407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.693 [2024-12-14 22:45:32.280439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.693 qpair failed and we were unable to recover it. 00:36:11.693 [2024-12-14 22:45:32.280554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.693 [2024-12-14 22:45:32.280587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.693 qpair failed and we were unable to recover it. 
00:36:11.693 [2024-12-14 22:45:32.280759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.693 [2024-12-14 22:45:32.280793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.693 qpair failed and we were unable to recover it. 00:36:11.693 [2024-12-14 22:45:32.280923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.693 [2024-12-14 22:45:32.280957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.693 qpair failed and we were unable to recover it. 00:36:11.693 [2024-12-14 22:45:32.281076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.693 [2024-12-14 22:45:32.281109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.693 qpair failed and we were unable to recover it. 00:36:11.693 [2024-12-14 22:45:32.281288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.693 [2024-12-14 22:45:32.281321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.693 qpair failed and we were unable to recover it. 00:36:11.693 [2024-12-14 22:45:32.281492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.693 [2024-12-14 22:45:32.281524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.693 qpair failed and we were unable to recover it. 
00:36:11.693 [2024-12-14 22:45:32.281634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.693 [2024-12-14 22:45:32.281667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.693 qpair failed and we were unable to recover it. 00:36:11.693 [2024-12-14 22:45:32.281914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.693 [2024-12-14 22:45:32.281947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.282089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.282122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.282235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.282266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.282472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.282504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 
00:36:11.694 [2024-12-14 22:45:32.282693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.282725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.282843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.282875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.283066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.283138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.283439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.283475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.283601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.283634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 
00:36:11.694 [2024-12-14 22:45:32.283813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.283845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.284067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.284102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.284287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.284321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.284531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.284563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.284773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.284805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 
00:36:11.694 [2024-12-14 22:45:32.284996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.285028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.285218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.285250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.285437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.285470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.285649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.285682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.285809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.285840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 
00:36:11.694 [2024-12-14 22:45:32.285995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.286028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.286147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.286179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.286302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.286333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.286598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.286630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.286754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.286792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 
00:36:11.694 [2024-12-14 22:45:32.286924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.286958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.287106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.287138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.287329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.287361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.287579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.287610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.287720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.287751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 
00:36:11.694 [2024-12-14 22:45:32.287928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.287963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.288089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.288120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.288233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.288265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.288371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.288402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.288528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.288560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 
00:36:11.694 [2024-12-14 22:45:32.288684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.694 [2024-12-14 22:45:32.288716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.694 qpair failed and we were unable to recover it. 00:36:11.694 [2024-12-14 22:45:32.288829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.288860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.289056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.289090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.289220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.289253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.289434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.289466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 
00:36:11.695 [2024-12-14 22:45:32.289671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.289702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.289883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.289928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.290127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.290160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.290344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.290376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.290550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.290582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 
00:36:11.695 [2024-12-14 22:45:32.290762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.290795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.290923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.290955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.291126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.291157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.291469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.291502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.291631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.291664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 
00:36:11.695 [2024-12-14 22:45:32.291799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.291831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.291957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.291991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.292232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.292266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.292384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.292418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.292612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.292645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 
00:36:11.695 [2024-12-14 22:45:32.292764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.292795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.292986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.293021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.293153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.293185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.293374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.293407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.293524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.293555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 
00:36:11.695 [2024-12-14 22:45:32.293682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.293714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.293826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.293858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.293980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.294013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.294122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.294155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.294271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.294308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 
00:36:11.695 [2024-12-14 22:45:32.294415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.294446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.294619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.294652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.294829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.294861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.294998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.295031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.295275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.295308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 
00:36:11.695 [2024-12-14 22:45:32.295421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.295453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.295567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.295598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.295842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.295873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.296059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.296096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 00:36:11.695 [2024-12-14 22:45:32.296294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.695 [2024-12-14 22:45:32.296328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.695 qpair failed and we were unable to recover it. 
00:36:11.695 [2024-12-14 22:45:32.296444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.296476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.296590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.296623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.296863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.296897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.297106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.297141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.297320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.297352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 
00:36:11.696 [2024-12-14 22:45:32.297521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.297554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.297759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.297791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.297919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.297953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.298134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.298167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.298300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.298332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 
00:36:11.696 [2024-12-14 22:45:32.298459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.298491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.298669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.298702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.298940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.298975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.299179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.299213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.299406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.299438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 
00:36:11.696 [2024-12-14 22:45:32.299553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.299587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.299767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.299804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.299976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.300010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.300129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.300161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.300270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.300302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 
00:36:11.696 [2024-12-14 22:45:32.300472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.300506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.300747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.300779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.300957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.300991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.301118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.301154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.301288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.301320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 
00:36:11.696 [2024-12-14 22:45:32.301445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.301478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.301676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.301708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.301909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.301941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.302056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.302087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.302192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.302230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 
00:36:11.696 [2024-12-14 22:45:32.302362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.302393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.302495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.302529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.302727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.302758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.302936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.302968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.303167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.303199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 
00:36:11.696 [2024-12-14 22:45:32.303391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.303423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.303551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.303583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.303784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.303817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.303930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.696 [2024-12-14 22:45:32.303962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.696 qpair failed and we were unable to recover it. 00:36:11.696 [2024-12-14 22:45:32.304077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.304110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 
00:36:11.697 [2024-12-14 22:45:32.304374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.304406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.304512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.304543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.304727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.304758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.304877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.304934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.305207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.305240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 
00:36:11.697 [2024-12-14 22:45:32.305368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.305400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.305513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.305546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.305782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.305813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.305928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.305962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.306072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.306103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 
00:36:11.697 [2024-12-14 22:45:32.306222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.306253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.306365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.306399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.306508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.306539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.306736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.306770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.307002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.307037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 
00:36:11.697 [2024-12-14 22:45:32.307151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.307184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.307364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.307434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.307659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.307696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.307839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.307873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.308021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.308057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 
00:36:11.697 [2024-12-14 22:45:32.308183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.308214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.308406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.308438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.308615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.308648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.308820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.308853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.309037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.309071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 
00:36:11.697 [2024-12-14 22:45:32.309238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.309271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.309392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.309424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.309593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.309625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.309742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.309774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.309892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.309942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 
00:36:11.697 [2024-12-14 22:45:32.310061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.310093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.310264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.310298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.310478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.310512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.310625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.310656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.310831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.310864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 
00:36:11.697 [2024-12-14 22:45:32.311014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.311048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.311172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.311204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.311397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.311430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.697 qpair failed and we were unable to recover it. 00:36:11.697 [2024-12-14 22:45:32.311617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.697 [2024-12-14 22:45:32.311650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.698 qpair failed and we were unable to recover it. 00:36:11.698 [2024-12-14 22:45:32.311826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.698 [2024-12-14 22:45:32.311858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.698 qpair failed and we were unable to recover it. 
00:36:11.698 [2024-12-14 22:45:32.312055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.698 [2024-12-14 22:45:32.312088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.698 qpair failed and we were unable to recover it. 00:36:11.698 [2024-12-14 22:45:32.312287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.698 [2024-12-14 22:45:32.312320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.698 qpair failed and we were unable to recover it. 00:36:11.698 [2024-12-14 22:45:32.312442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.698 [2024-12-14 22:45:32.312474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.698 qpair failed and we were unable to recover it. 00:36:11.698 [2024-12-14 22:45:32.312688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.698 [2024-12-14 22:45:32.312722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.698 qpair failed and we were unable to recover it. 00:36:11.698 [2024-12-14 22:45:32.312827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.698 [2024-12-14 22:45:32.312860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.698 qpair failed and we were unable to recover it. 
00:36:11.700 [2024-12-14 22:45:32.330630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.700 [2024-12-14 22:45:32.330702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:11.700 qpair failed and we were unable to recover it.
[The same triplet then repeats continuously for tqpair=0x7f1a94000b90.]
00:36:11.701 [2024-12-14 22:45:32.334122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.334154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.334334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.334368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.334484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.334517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.334623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.334655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.334900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.334945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 
00:36:11.701 [2024-12-14 22:45:32.335119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.335153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.335361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.335394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.335578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.335610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.335790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.335823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.335960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.335995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 
00:36:11.701 [2024-12-14 22:45:32.336129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.336162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.336423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.336456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.336640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.336673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.336788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.336821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.337023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.337058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 
00:36:11.701 [2024-12-14 22:45:32.337232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.337265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.337380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.337413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.337532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.337565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.337684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.337717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.337912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.337946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 
00:36:11.701 [2024-12-14 22:45:32.338119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.338151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.338325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.338363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.338470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.338503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.338769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.338802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.338922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.338956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 
00:36:11.701 [2024-12-14 22:45:32.339141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.339174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.339367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.339400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.339657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.339689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.339820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.339852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.340041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.340074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 
00:36:11.701 [2024-12-14 22:45:32.340261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.340294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.340408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.340443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.340550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.340583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.340690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.340722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.701 [2024-12-14 22:45:32.340891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.340935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 
00:36:11.701 [2024-12-14 22:45:32.341134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.701 [2024-12-14 22:45:32.341167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.701 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.341338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.341371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.341557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.341590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.341785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.341818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.341949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.341984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-12-14 22:45:32.342161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.342194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.342371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.342405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.342590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.342622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.342744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.342777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.342883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.342928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-12-14 22:45:32.343101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.343133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.343248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.343280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.343393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.343426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.343607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.343640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.343823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.343856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-12-14 22:45:32.344058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.344093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.344197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.344229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.344406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.344439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.344620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.344652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.344773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.344806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-12-14 22:45:32.344945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.344979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.345150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.345182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.345376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.345409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.345653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.345685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.345875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.345919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-12-14 22:45:32.346056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.346089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.346191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.346229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.346406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.346439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.346728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.346762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.346942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.346977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-12-14 22:45:32.347107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.347139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.347315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.347348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.347476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.347510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.347633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.347666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.347854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.347888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-12-14 22:45:32.348078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.348110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.348289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.348322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.348525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.348558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.348854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.348887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 00:36:11.702 [2024-12-14 22:45:32.349079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.702 [2024-12-14 22:45:32.349112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-12-14 22:45:32.349404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.703 [2024-12-14 22:45:32.349437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.703 qpair failed and we were unable to recover it. 00:36:11.703 [2024-12-14 22:45:32.349611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.703 [2024-12-14 22:45:32.349644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.703 qpair failed and we were unable to recover it. 00:36:11.703 [2024-12-14 22:45:32.349818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.703 [2024-12-14 22:45:32.349853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.703 qpair failed and we were unable to recover it. 00:36:11.703 [2024-12-14 22:45:32.350012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.703 [2024-12-14 22:45:32.350047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.703 qpair failed and we were unable to recover it. 00:36:11.703 [2024-12-14 22:45:32.350175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.703 [2024-12-14 22:45:32.350207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.703 qpair failed and we were unable to recover it. 
00:36:11.706 [2024-12-14 22:45:32.373032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.373067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.373237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.373270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.373441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.373473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.373643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.373675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.373864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.373897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 
00:36:11.706 [2024-12-14 22:45:32.374117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.374152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.374273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.374306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.374503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.374536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.374706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.374738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.374843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.374876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 
00:36:11.706 [2024-12-14 22:45:32.375129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.375163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.375293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.375327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.375513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.375545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.375820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.375853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.376075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.376110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 
00:36:11.706 [2024-12-14 22:45:32.376370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.376403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.376521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.376554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.376686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.376719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.376968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.377003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.377179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.377212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 
00:36:11.706 [2024-12-14 22:45:32.377336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.377369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.377546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.377579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.377765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.377798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.378039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.378075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.378335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.378368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 
00:36:11.706 [2024-12-14 22:45:32.378552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.378584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.378783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.378817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.379012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.379047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.379216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.379249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.379377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.379410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 
00:36:11.706 [2024-12-14 22:45:32.379606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.379639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.379825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.379864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.380069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.380104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.380275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.380308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.380478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.380512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 
00:36:11.706 [2024-12-14 22:45:32.380760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.380793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.380913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.706 [2024-12-14 22:45:32.380948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.706 qpair failed and we were unable to recover it. 00:36:11.706 [2024-12-14 22:45:32.381133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.381166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.381429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.381462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.381652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.381685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 
00:36:11.707 [2024-12-14 22:45:32.381797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.381830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.381956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.381991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.382182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.382214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.382425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.382459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.382577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.382610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 
00:36:11.707 [2024-12-14 22:45:32.382813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.382846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.383046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.383081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.383271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.383303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.383541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.383573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.383758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.383791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 
00:36:11.707 [2024-12-14 22:45:32.383970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.384005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.384121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.384154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.384394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.384427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.384609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.384643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.384883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.384923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 
00:36:11.707 [2024-12-14 22:45:32.385093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.385126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.385335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.385368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.385547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.385580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.385796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.385830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.385954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.385989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 
00:36:11.707 [2024-12-14 22:45:32.386116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.386150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.386328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.386361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.386599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.386632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.386746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.386779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.386891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.386934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 
00:36:11.707 [2024-12-14 22:45:32.387125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.387158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.387272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.387306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.387515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.387548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.387732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.387765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.388025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.388060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 
00:36:11.707 [2024-12-14 22:45:32.388179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.388212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.388400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.388438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.388553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.388587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.388853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.388886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.389010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.389043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 
00:36:11.707 [2024-12-14 22:45:32.389255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.707 [2024-12-14 22:45:32.389287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.707 qpair failed and we were unable to recover it. 00:36:11.707 [2024-12-14 22:45:32.389421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.708 [2024-12-14 22:45:32.389454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.708 qpair failed and we were unable to recover it. 00:36:11.708 [2024-12-14 22:45:32.389658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.708 [2024-12-14 22:45:32.389691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.708 qpair failed and we were unable to recover it. 00:36:11.708 [2024-12-14 22:45:32.389806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.708 [2024-12-14 22:45:32.389839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.708 qpair failed and we were unable to recover it. 00:36:11.708 [2024-12-14 22:45:32.390029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.708 [2024-12-14 22:45:32.390063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.708 qpair failed and we were unable to recover it. 
00:36:11.710 [2024-12-14 22:45:32.412739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.710 [2024-12-14 22:45:32.412772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.710 qpair failed and we were unable to recover it. 00:36:11.710 [2024-12-14 22:45:32.412975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.710 [2024-12-14 22:45:32.413009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.710 qpair failed and we were unable to recover it. 00:36:11.710 [2024-12-14 22:45:32.413131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.710 [2024-12-14 22:45:32.413163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.710 qpair failed and we were unable to recover it. 00:36:11.710 [2024-12-14 22:45:32.413278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.710 [2024-12-14 22:45:32.413312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.710 qpair failed and we were unable to recover it. 00:36:11.710 [2024-12-14 22:45:32.413485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.413518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 
00:36:11.711 [2024-12-14 22:45:32.413634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.413666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.413788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.413825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.413958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.413993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.414194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.414227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.414335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.414368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 
00:36:11.711 [2024-12-14 22:45:32.414550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.414582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.414845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.414878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.415073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.415107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.415225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.415257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.415380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.415413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 
00:36:11.711 [2024-12-14 22:45:32.415655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.415689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.415893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.415936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.416044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.416075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.416198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.416232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.416355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.416387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 
00:36:11.711 [2024-12-14 22:45:32.416590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.416624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.416826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.416858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.417107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.417141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.417313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.417346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.417593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.417626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 
00:36:11.711 [2024-12-14 22:45:32.417848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.417881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.418027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.418060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.418242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.418275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.418465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.418499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 00:36:11.711 [2024-12-14 22:45:32.418682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.711 [2024-12-14 22:45:32.418715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.711 qpair failed and we were unable to recover it. 
00:36:11.711 [2024-12-14 22:45:32.418886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.711 [2024-12-14 22:45:32.418929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:11.711 qpair failed and we were unable to recover it.
00:36:11.711 [2024-12-14 22:45:32.419102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.711 [2024-12-14 22:45:32.419134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:11.711 qpair failed and we were unable to recover it.
00:36:11.711 [2024-12-14 22:45:32.419340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.711 [2024-12-14 22:45:32.419373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:11.711 qpair failed and we were unable to recover it.
00:36:11.711 [2024-12-14 22:45:32.419599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.711 [2024-12-14 22:45:32.419672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.711 qpair failed and we were unable to recover it.
00:36:11.711 [2024-12-14 22:45:32.419888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.711 [2024-12-14 22:45:32.419942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.711 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / qpair failed sequence repeats with advancing timestamps (22:45:32.420131 through 22:45:32.433844) for tqpair=0x5fd6a0 ...]
00:36:11.713 [2024-12-14 22:45:32.434114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.713 [2024-12-14 22:45:32.434154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.713 qpair failed and we were unable to recover it. 00:36:11.713 [2024-12-14 22:45:32.434332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.713 [2024-12-14 22:45:32.434366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.713 qpair failed and we were unable to recover it. 00:36:11.713 [2024-12-14 22:45:32.434481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.713 [2024-12-14 22:45:32.434514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.713 qpair failed and we were unable to recover it. 00:36:11.713 [2024-12-14 22:45:32.434749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.713 [2024-12-14 22:45:32.434783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.713 qpair failed and we were unable to recover it. 00:36:11.713 [2024-12-14 22:45:32.434933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.713 [2024-12-14 22:45:32.434969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.713 qpair failed and we were unable to recover it. 
00:36:11.713 [2024-12-14 22:45:32.435155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.713 [2024-12-14 22:45:32.435188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.713 qpair failed and we were unable to recover it. 00:36:11.713 [2024-12-14 22:45:32.435378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.713 [2024-12-14 22:45:32.435411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.713 qpair failed and we were unable to recover it. 00:36:11.713 [2024-12-14 22:45:32.435656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.713 [2024-12-14 22:45:32.435690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.713 qpair failed and we were unable to recover it. 00:36:11.713 [2024-12-14 22:45:32.435817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.713 [2024-12-14 22:45:32.435851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.713 qpair failed and we were unable to recover it. 00:36:11.713 [2024-12-14 22:45:32.436110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.713 [2024-12-14 22:45:32.436144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.713 qpair failed and we were unable to recover it. 
00:36:11.713 [2024-12-14 22:45:32.436256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.713 [2024-12-14 22:45:32.436289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.713 qpair failed and we were unable to recover it. 00:36:11.713 [2024-12-14 22:45:32.436477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.713 [2024-12-14 22:45:32.436509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.713 qpair failed and we were unable to recover it. 00:36:11.713 [2024-12-14 22:45:32.436632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.713 [2024-12-14 22:45:32.436665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.713 qpair failed and we were unable to recover it. 00:36:11.713 [2024-12-14 22:45:32.436837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.713 [2024-12-14 22:45:32.436870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.713 qpair failed and we were unable to recover it. 00:36:11.713 [2024-12-14 22:45:32.437096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.713 [2024-12-14 22:45:32.437169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.713 qpair failed and we were unable to recover it. 
00:36:11.713 [2024-12-14 22:45:32.437377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.713 [2024-12-14 22:45:32.437415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.713 qpair failed and we were unable to recover it. 00:36:11.713 [2024-12-14 22:45:32.437625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.713 [2024-12-14 22:45:32.437658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.713 qpair failed and we were unable to recover it. 00:36:11.713 [2024-12-14 22:45:32.437840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.713 [2024-12-14 22:45:32.437875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.713 qpair failed and we were unable to recover it. 00:36:11.713 [2024-12-14 22:45:32.438097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.438130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.438249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.438281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 
00:36:11.714 [2024-12-14 22:45:32.438539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.438573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.438771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.438804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.439012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.439048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.439176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.439208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.439446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.439479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 
00:36:11.714 [2024-12-14 22:45:32.439607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.439639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.439774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.439807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.439919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.439954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.440183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.440254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.440393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.440430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 
00:36:11.714 [2024-12-14 22:45:32.440607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.440640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.440857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.440890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.441091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.441125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.441244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.441277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.441516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.441549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 
00:36:11.714 [2024-12-14 22:45:32.441761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.441794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.441994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.442029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.442160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.442193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.442352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.442385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.442561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.442594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 
00:36:11.714 [2024-12-14 22:45:32.442789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.442822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.442959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.442994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.443209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.443242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.443438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.443471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.443595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.443628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 
00:36:11.714 [2024-12-14 22:45:32.443751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.443783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.444021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.444056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.444254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.444287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.444412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.444444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.444558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.444591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 
00:36:11.714 [2024-12-14 22:45:32.444761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.444794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.444921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.444956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.445138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.445171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.445342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.445374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.445605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.445638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 
00:36:11.714 [2024-12-14 22:45:32.445836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.445874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.446163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.446197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.446368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.714 [2024-12-14 22:45:32.446401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.714 qpair failed and we were unable to recover it. 00:36:11.714 [2024-12-14 22:45:32.446583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.446616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 00:36:11.715 [2024-12-14 22:45:32.446799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.446831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 
00:36:11.715 [2024-12-14 22:45:32.447024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.447058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 00:36:11.715 [2024-12-14 22:45:32.447245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.447279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 00:36:11.715 [2024-12-14 22:45:32.447384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.447417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 00:36:11.715 [2024-12-14 22:45:32.447602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.447635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 00:36:11.715 [2024-12-14 22:45:32.447813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.447845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 
00:36:11.715 [2024-12-14 22:45:32.447961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.447994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 00:36:11.715 [2024-12-14 22:45:32.448178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.448210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 00:36:11.715 [2024-12-14 22:45:32.448327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.448359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 00:36:11.715 [2024-12-14 22:45:32.448624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.448658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 00:36:11.715 [2024-12-14 22:45:32.448864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.448897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 
00:36:11.715 [2024-12-14 22:45:32.449029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.449060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 00:36:11.715 [2024-12-14 22:45:32.449298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.449332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 00:36:11.715 [2024-12-14 22:45:32.449522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.449555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 00:36:11.715 [2024-12-14 22:45:32.449674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.449707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 00:36:11.715 [2024-12-14 22:45:32.449950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.449984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 
00:36:11.715 [2024-12-14 22:45:32.450251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.450284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 00:36:11.715 [2024-12-14 22:45:32.450478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.450510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 00:36:11.715 [2024-12-14 22:45:32.450623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.450655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 00:36:11.715 [2024-12-14 22:45:32.450851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.450883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 00:36:11.715 [2024-12-14 22:45:32.451070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.715 [2024-12-14 22:45:32.451104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.715 qpair failed and we were unable to recover it. 
00:36:11.718 [... the same posix_sock_create connect() errno = 111 / nvme_tcp_qpair_connect_sock failure triple for tqpair=0x5fd6a0 (addr=10.0.0.2, port=4420) repeats ~110 more times over 22:45:32.451286-22:45:32.474845; identical entries condensed ...]
00:36:11.718 [2024-12-14 22:45:32.475085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.475119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.718 [2024-12-14 22:45:32.475362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.475401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.718 [2024-12-14 22:45:32.475654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.475686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.718 [2024-12-14 22:45:32.475866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.475899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.718 [2024-12-14 22:45:32.476024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.476063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 
00:36:11.718 [2024-12-14 22:45:32.476168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.476199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.718 [2024-12-14 22:45:32.476437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.476470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.718 [2024-12-14 22:45:32.476649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.476682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.718 [2024-12-14 22:45:32.476795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.476828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.718 [2024-12-14 22:45:32.477098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.477133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 
00:36:11.718 [2024-12-14 22:45:32.477317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.477350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.718 [2024-12-14 22:45:32.477533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.477566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.718 [2024-12-14 22:45:32.477737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.477769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.718 [2024-12-14 22:45:32.477952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.477987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.718 [2024-12-14 22:45:32.478124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.478157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 
00:36:11.718 [2024-12-14 22:45:32.478350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.478383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.718 [2024-12-14 22:45:32.478561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.478593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.718 [2024-12-14 22:45:32.478781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.478814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.718 [2024-12-14 22:45:32.478989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.479023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.718 [2024-12-14 22:45:32.479209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.479242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 
00:36:11.718 [2024-12-14 22:45:32.479510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.479543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.718 [2024-12-14 22:45:32.479739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.479773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.718 [2024-12-14 22:45:32.479949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.718 [2024-12-14 22:45:32.479983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.718 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.480209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.480243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.480428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.480460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 
00:36:11.719 [2024-12-14 22:45:32.480641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.480675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.480786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.480818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.481078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.481113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.481298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.481331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.481467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.481501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 
00:36:11.719 [2024-12-14 22:45:32.481631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.481663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.481921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.481954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.482153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.482185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.482364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.482396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.482605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.482638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 
00:36:11.719 [2024-12-14 22:45:32.482831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.482863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.483132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.483167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.483348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.483381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.483575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.483608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.483874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.483914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 
00:36:11.719 [2024-12-14 22:45:32.484093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.484126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.484240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.484270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.484536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.484573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.484754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.484787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.484965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.484997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 
00:36:11.719 [2024-12-14 22:45:32.485204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.485237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.485426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.485460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.485702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.485734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.485996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.486031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.486153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.486186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 
00:36:11.719 [2024-12-14 22:45:32.486362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.486397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.486519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.486551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.486733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.486764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.487021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.487056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.487253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.487287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 
00:36:11.719 [2024-12-14 22:45:32.487419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.487452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.487644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.487677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.487859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.487891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.488089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.488121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.488374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.488406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 
00:36:11.719 [2024-12-14 22:45:32.488697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.488730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.489028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.489062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.719 [2024-12-14 22:45:32.489318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.719 [2024-12-14 22:45:32.489351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.719 qpair failed and we were unable to recover it. 00:36:11.720 [2024-12-14 22:45:32.489542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.720 [2024-12-14 22:45:32.489575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.720 qpair failed and we were unable to recover it. 00:36:11.720 [2024-12-14 22:45:32.489749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.720 [2024-12-14 22:45:32.489783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.720 qpair failed and we were unable to recover it. 
00:36:11.720 [2024-12-14 22:45:32.490050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.720 [2024-12-14 22:45:32.490086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.720 qpair failed and we were unable to recover it. 00:36:11.720 [2024-12-14 22:45:32.490305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.720 [2024-12-14 22:45:32.490338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.720 qpair failed and we were unable to recover it. 00:36:11.720 [2024-12-14 22:45:32.490515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.720 [2024-12-14 22:45:32.490548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.720 qpair failed and we were unable to recover it. 00:36:11.720 [2024-12-14 22:45:32.490721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.720 [2024-12-14 22:45:32.490755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.720 qpair failed and we were unable to recover it. 00:36:11.720 [2024-12-14 22:45:32.490970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.720 [2024-12-14 22:45:32.491010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.720 qpair failed and we were unable to recover it. 
00:36:11.720 [2024-12-14 22:45:32.491137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.720 [2024-12-14 22:45:32.491168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.720 qpair failed and we were unable to recover it. 00:36:11.720 [2024-12-14 22:45:32.491409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.720 [2024-12-14 22:45:32.491442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.720 qpair failed and we were unable to recover it. 00:36:11.720 [2024-12-14 22:45:32.491632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.720 [2024-12-14 22:45:32.491666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.720 qpair failed and we were unable to recover it. 00:36:11.720 [2024-12-14 22:45:32.491859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.720 [2024-12-14 22:45:32.491891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.720 qpair failed and we were unable to recover it. 00:36:11.720 [2024-12-14 22:45:32.492094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.720 [2024-12-14 22:45:32.492127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.720 qpair failed and we were unable to recover it. 
00:36:11.720 [2024-12-14 22:45:32.492260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.720 [2024-12-14 22:45:32.492293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.720 qpair failed and we were unable to recover it. 00:36:11.720 [2024-12-14 22:45:32.492477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.720 [2024-12-14 22:45:32.492509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.720 qpair failed and we were unable to recover it. 00:36:11.720 [2024-12-14 22:45:32.492684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.720 [2024-12-14 22:45:32.492717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.720 qpair failed and we were unable to recover it. 00:36:11.720 [2024-12-14 22:45:32.492853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.720 [2024-12-14 22:45:32.492887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.720 qpair failed and we were unable to recover it. 00:36:11.720 [2024-12-14 22:45:32.493016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.720 [2024-12-14 22:45:32.493049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.720 qpair failed and we were unable to recover it. 
00:36:11.723 [2024-12-14 22:45:32.519677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.519711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.519974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.520010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.520223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.520257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.520430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.520463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.520724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.520758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 
00:36:11.723 [2024-12-14 22:45:32.520965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.520999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.521246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.521281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.521544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.521577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.521863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.521896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.522047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.522080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 
00:36:11.723 [2024-12-14 22:45:32.522353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.522388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.522637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.522671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.522859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.522893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.523096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.523131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.523263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.523297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 
00:36:11.723 [2024-12-14 22:45:32.523485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.523523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.523717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.523750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.523864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.523898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.524032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.524065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.524206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.524238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 
00:36:11.723 [2024-12-14 22:45:32.524361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.524396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.524582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.524616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.524737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.524769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.525011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.525046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.525224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.525258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 
00:36:11.723 [2024-12-14 22:45:32.525468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.525501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.723 [2024-12-14 22:45:32.525741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.723 [2024-12-14 22:45:32.525775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.723 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.525954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.525989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.526095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.526129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.526401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.526436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 
00:36:11.724 [2024-12-14 22:45:32.526677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.526712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.526977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.527012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.527147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.527179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.527446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.527481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.527674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.527706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 
00:36:11.724 [2024-12-14 22:45:32.527900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.527944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.528054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.528086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.528215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.528249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.528441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.528474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.528677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.528709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 
00:36:11.724 [2024-12-14 22:45:32.528917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.528951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.529145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.529177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.529356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.529390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.529520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.529555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.529807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.529840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 
00:36:11.724 [2024-12-14 22:45:32.529975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.530010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.530119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.530151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.530275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.530310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.530423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.530456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.530593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.530626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 
00:36:11.724 [2024-12-14 22:45:32.530828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.530861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.531063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.531098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.531203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.531236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.531493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.531528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.531807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.531840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 
00:36:11.724 [2024-12-14 22:45:32.532047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.532082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.532274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.532313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.532502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.532536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.532775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.532809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.533017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.533053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 
00:36:11.724 [2024-12-14 22:45:32.533244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.533279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.533502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.533535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.533721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.533756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.533939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.533974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.534167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.534201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 
00:36:11.724 [2024-12-14 22:45:32.534441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.534474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.724 qpair failed and we were unable to recover it. 00:36:11.724 [2024-12-14 22:45:32.534742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.724 [2024-12-14 22:45:32.534776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.725 qpair failed and we were unable to recover it. 00:36:11.725 [2024-12-14 22:45:32.534889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.725 [2024-12-14 22:45:32.534930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.725 qpair failed and we were unable to recover it. 00:36:11.725 [2024-12-14 22:45:32.535182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.725 [2024-12-14 22:45:32.535216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.725 qpair failed and we were unable to recover it. 00:36:11.725 [2024-12-14 22:45:32.535457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.725 [2024-12-14 22:45:32.535492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.725 qpair failed and we were unable to recover it. 
00:36:11.725 [2024-12-14 22:45:32.535736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.725 [2024-12-14 22:45:32.535770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.725 qpair failed and we were unable to recover it. 00:36:11.725 [2024-12-14 22:45:32.535969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.725 [2024-12-14 22:45:32.536003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.725 qpair failed and we were unable to recover it. 00:36:11.725 [2024-12-14 22:45:32.536189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.725 [2024-12-14 22:45:32.536223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.725 qpair failed and we were unable to recover it. 00:36:11.725 [2024-12-14 22:45:32.536339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.725 [2024-12-14 22:45:32.536370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.725 qpair failed and we were unable to recover it. 00:36:11.725 [2024-12-14 22:45:32.536602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.725 [2024-12-14 22:45:32.536643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:11.725 qpair failed and we were unable to recover it. 
00:36:11.725 [2024-12-14 22:45:32.536886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.725 [2024-12-14 22:45:32.536967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:11.725 qpair failed and we were unable to recover it.
00:36:11.725 [2024-12-14 22:45:32.541707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.725 [2024-12-14 22:45:32.541777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:11.725 qpair failed and we were unable to recover it.
... (identical connect() failed / qpair failed sequences for tqpair=0x5fd6a0 and tqpair=0x7f1a94000b90, timestamps 22:45:32.536886 through 22:45:32.562243, omitted) ...
00:36:12.005 [2024-12-14 22:45:32.562432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.005 [2024-12-14 22:45:32.562465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.005 qpair failed and we were unable to recover it. 00:36:12.005 [2024-12-14 22:45:32.562636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.005 [2024-12-14 22:45:32.562670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.005 qpair failed and we were unable to recover it. 00:36:12.005 [2024-12-14 22:45:32.562801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.005 [2024-12-14 22:45:32.562835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.005 qpair failed and we were unable to recover it. 00:36:12.005 [2024-12-14 22:45:32.562973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.005 [2024-12-14 22:45:32.563009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.005 qpair failed and we were unable to recover it. 00:36:12.005 [2024-12-14 22:45:32.563139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.005 [2024-12-14 22:45:32.563172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.005 qpair failed and we were unable to recover it. 
00:36:12.005 [2024-12-14 22:45:32.563437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.563470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.563762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.563797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.563988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.564022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.564148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.564181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.564393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.564426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 
00:36:12.006 [2024-12-14 22:45:32.564604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.564637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.564914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.564949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.565147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.565182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.565357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.565389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.565672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.565706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 
00:36:12.006 [2024-12-14 22:45:32.565817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.565855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.566189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.566223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.566487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.566520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.566636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.566670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.566933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.566968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 
00:36:12.006 [2024-12-14 22:45:32.567232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.567265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.567455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.567488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.567746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.567779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.568007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.568041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.568280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.568313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 
00:36:12.006 [2024-12-14 22:45:32.568566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.568600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.568774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.568807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.569035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.569069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.569329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.569362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.569602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.569635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 
00:36:12.006 [2024-12-14 22:45:32.569873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.569920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.570046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.570078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.570253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.570286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.570479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.570511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.570701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.570733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 
00:36:12.006 [2024-12-14 22:45:32.570924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.570959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.571137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.571171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.571371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.571403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.571616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.571649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.571856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.571888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 
00:36:12.006 [2024-12-14 22:45:32.572076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.572110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.572308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.572342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.572539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.572583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.572754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.572788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 00:36:12.006 [2024-12-14 22:45:32.572924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.006 [2024-12-14 22:45:32.572959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.006 qpair failed and we were unable to recover it. 
00:36:12.006 [2024-12-14 22:45:32.573226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.573259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.573533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.573566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.573844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.573877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.574158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.574192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.574445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.574479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 
00:36:12.007 [2024-12-14 22:45:32.574760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.574793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.575000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.575035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.575215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.575248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.575515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.575548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.575817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.575850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 
00:36:12.007 [2024-12-14 22:45:32.575987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.576020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.576242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.576276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.576458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.576491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.576750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.576783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.577051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.577085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 
00:36:12.007 [2024-12-14 22:45:32.577274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.577307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.577567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.577600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.577779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.577812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.577940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.577975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.578164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.578196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 
00:36:12.007 [2024-12-14 22:45:32.578367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.578400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.578657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.578689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.578881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.578924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.579129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.579161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.579330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.579364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 
00:36:12.007 [2024-12-14 22:45:32.579633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.579667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.579939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.579974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.580154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.580187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.580363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.580396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 00:36:12.007 [2024-12-14 22:45:32.580597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.580630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 
00:36:12.007 [2024-12-14 22:45:32.580901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.007 [2024-12-14 22:45:32.580942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.007 qpair failed and we were unable to recover it. 
[... identical connect()/qpair-failure messages for tqpair=0x5fd6a0 (addr=10.0.0.2, port=4420, errno = 111) repeated through 2024-12-14 22:45:32.610475; repeats omitted ...]
00:36:12.010 [2024-12-14 22:45:32.610678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.010 [2024-12-14 22:45:32.610712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.010 qpair failed and we were unable to recover it. 00:36:12.010 [2024-12-14 22:45:32.610973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.010 [2024-12-14 22:45:32.611010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.010 qpair failed and we were unable to recover it. 00:36:12.010 [2024-12-14 22:45:32.611222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.010 [2024-12-14 22:45:32.611255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.010 qpair failed and we were unable to recover it. 00:36:12.010 [2024-12-14 22:45:32.611500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.010 [2024-12-14 22:45:32.611533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.010 qpair failed and we were unable to recover it. 00:36:12.010 [2024-12-14 22:45:32.611823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.010 [2024-12-14 22:45:32.611856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.010 qpair failed and we were unable to recover it. 
00:36:12.010 [2024-12-14 22:45:32.612142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.010 [2024-12-14 22:45:32.612177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.010 qpair failed and we were unable to recover it. 00:36:12.010 [2024-12-14 22:45:32.612494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.010 [2024-12-14 22:45:32.612528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.010 qpair failed and we were unable to recover it. 00:36:12.010 [2024-12-14 22:45:32.612779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.612812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.612986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.613021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.613223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.613257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 
00:36:12.011 [2024-12-14 22:45:32.613434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.613467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.613727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.613760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.613955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.613990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.614116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.614150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.614286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.614318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 
00:36:12.011 [2024-12-14 22:45:32.614536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.614569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.614769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.614803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.615006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.615041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.615305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.615338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.615628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.615662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 
00:36:12.011 [2024-12-14 22:45:32.615795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.615835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.616076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.616111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.616354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.616387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.616515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.616549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.616719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.616753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 
00:36:12.011 [2024-12-14 22:45:32.616965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.617000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.617258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.617292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.617487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.617520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.617658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.617692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.617809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.617848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 
00:36:12.011 [2024-12-14 22:45:32.618059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.618094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.618363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.618397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.618593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.618627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.618875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.618921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.619171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.619205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 
00:36:12.011 [2024-12-14 22:45:32.619491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.619524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.619768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.619802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.620086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.620123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.620389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.620424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.620608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.620641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 
00:36:12.011 [2024-12-14 22:45:32.620852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.620886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.621074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.621109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.621341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.621374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.621579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.621612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.621791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.621825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 
00:36:12.011 [2024-12-14 22:45:32.622027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.622062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.622324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.011 [2024-12-14 22:45:32.622358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.011 qpair failed and we were unable to recover it. 00:36:12.011 [2024-12-14 22:45:32.622608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.622641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.622814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.622848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.623127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.623162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 
00:36:12.012 [2024-12-14 22:45:32.623423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.623456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.623659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.623692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.623935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.623970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.624263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.624297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.624470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.624503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 
00:36:12.012 [2024-12-14 22:45:32.624751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.624785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.625046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.625082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.625278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.625311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.625563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.625596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.625852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.625886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 
00:36:12.012 [2024-12-14 22:45:32.626188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.626223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.626401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.626435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.626703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.626736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.626934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.626970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.627147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.627181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 
00:36:12.012 [2024-12-14 22:45:32.627446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.627479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.627746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.627780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.628070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.628105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.628234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.628269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.628531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.628564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 
00:36:12.012 [2024-12-14 22:45:32.628781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.628815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.629039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.629074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.629347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.629381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.629567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.629601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.629845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.629879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 
00:36:12.012 [2024-12-14 22:45:32.630008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.630043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.630243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.630277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.630496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.630529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.630796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.630830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 00:36:12.012 [2024-12-14 22:45:32.631057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.012 [2024-12-14 22:45:32.631092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.012 qpair failed and we were unable to recover it. 
00:36:12.015 [2024-12-14 22:45:32.660417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.015 [2024-12-14 22:45:32.660452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.015 qpair failed and we were unable to recover it. 00:36:12.015 [2024-12-14 22:45:32.660721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.015 [2024-12-14 22:45:32.660756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.015 qpair failed and we were unable to recover it. 00:36:12.015 [2024-12-14 22:45:32.661014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.015 [2024-12-14 22:45:32.661050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.015 qpair failed and we were unable to recover it. 00:36:12.015 [2024-12-14 22:45:32.661291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.015 [2024-12-14 22:45:32.661325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.015 qpair failed and we were unable to recover it. 00:36:12.015 [2024-12-14 22:45:32.661598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.015 [2024-12-14 22:45:32.661631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.015 qpair failed and we were unable to recover it. 
00:36:12.015 [2024-12-14 22:45:32.661820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.015 [2024-12-14 22:45:32.661853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.015 qpair failed and we were unable to recover it. 00:36:12.015 [2024-12-14 22:45:32.662170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.015 [2024-12-14 22:45:32.662207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.015 qpair failed and we were unable to recover it. 00:36:12.015 [2024-12-14 22:45:32.662328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.015 [2024-12-14 22:45:32.662362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.015 qpair failed and we were unable to recover it. 00:36:12.015 [2024-12-14 22:45:32.662556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.015 [2024-12-14 22:45:32.662589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.015 qpair failed and we were unable to recover it. 00:36:12.015 [2024-12-14 22:45:32.662864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.015 [2024-12-14 22:45:32.662898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.015 qpair failed and we were unable to recover it. 
00:36:12.015 [2024-12-14 22:45:32.663097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.015 [2024-12-14 22:45:32.663132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.015 qpair failed and we were unable to recover it. 00:36:12.015 [2024-12-14 22:45:32.663323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.663357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.663603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.663637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.663892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.663938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.664233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.664266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 
00:36:12.016 [2024-12-14 22:45:32.664519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.664553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.664779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.664818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.665070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.665107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.665331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.665365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.665655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.665690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 
00:36:12.016 [2024-12-14 22:45:32.665921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.665957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.666268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.666302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.666424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.666459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.666661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.666695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.666943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.666979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 
00:36:12.016 [2024-12-14 22:45:32.667175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.667209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.667485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.667519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.667764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.667798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.668105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.668141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.668327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.668361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 
00:36:12.016 [2024-12-14 22:45:32.668493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.668527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.668726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.668760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.669029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.669066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.669199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.669232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.669425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.669459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 
00:36:12.016 [2024-12-14 22:45:32.669639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.669674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.669864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.669898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.670127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.670162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.670427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.670461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.670670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.670704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 
00:36:12.016 [2024-12-14 22:45:32.670981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.671017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.671267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.671300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.671597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.671631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.671922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.671962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.672257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.672292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 
00:36:12.016 [2024-12-14 22:45:32.672472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.672507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.672778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.672812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.673110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.673146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.673407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.673441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.673737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.673772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 
00:36:12.016 [2024-12-14 22:45:32.673963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.016 [2024-12-14 22:45:32.673999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.016 qpair failed and we were unable to recover it. 00:36:12.016 [2024-12-14 22:45:32.674275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.674309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.674590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.674624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.674818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.674852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.675042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.675079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 
00:36:12.017 [2024-12-14 22:45:32.675208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.675242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.675493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.675527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.675832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.675867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.676138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.676174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.676356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.676391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 
00:36:12.017 [2024-12-14 22:45:32.676669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.676702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.676970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.677006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.677296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.677330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.677556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.677590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.677840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.677874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 
00:36:12.017 [2024-12-14 22:45:32.678156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.678191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.678464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.678498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.678746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.678781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.679041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.679078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.679293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.679327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 
00:36:12.017 [2024-12-14 22:45:32.679548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.679582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.679868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.679913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.680185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.680219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.680347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.680381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 00:36:12.017 [2024-12-14 22:45:32.680632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.017 [2024-12-14 22:45:32.680666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.017 qpair failed and we were unable to recover it. 
00:36:12.017 [2024-12-14 22:45:32.680793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.017 [2024-12-14 22:45:32.680827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.017 qpair failed and we were unable to recover it.
00:36:12.020 (same three-line connect()/qpair failure sequence repeated for each retry, tqpair=0x5fd6a0, addr=10.0.0.2, port=4420, from 22:45:32.680 through 22:45:32.712)
00:36:12.020 [2024-12-14 22:45:32.712325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.020 [2024-12-14 22:45:32.712360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.020 qpair failed and we were unable to recover it. 00:36:12.020 [2024-12-14 22:45:32.712633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.020 [2024-12-14 22:45:32.712668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.020 qpair failed and we were unable to recover it. 00:36:12.020 [2024-12-14 22:45:32.712865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.020 [2024-12-14 22:45:32.712899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.020 qpair failed and we were unable to recover it. 00:36:12.020 [2024-12-14 22:45:32.713102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.020 [2024-12-14 22:45:32.713137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.020 qpair failed and we were unable to recover it. 00:36:12.020 [2024-12-14 22:45:32.713319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.020 [2024-12-14 22:45:32.713354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.020 qpair failed and we were unable to recover it. 
00:36:12.020 [2024-12-14 22:45:32.713570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.020 [2024-12-14 22:45:32.713604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.020 qpair failed and we were unable to recover it. 00:36:12.020 [2024-12-14 22:45:32.713882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.020 [2024-12-14 22:45:32.713937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.020 qpair failed and we were unable to recover it. 00:36:12.020 [2024-12-14 22:45:32.714222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.020 [2024-12-14 22:45:32.714256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.020 qpair failed and we were unable to recover it. 00:36:12.020 [2024-12-14 22:45:32.714462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.020 [2024-12-14 22:45:32.714497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.020 qpair failed and we were unable to recover it. 00:36:12.020 [2024-12-14 22:45:32.714748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.020 [2024-12-14 22:45:32.714782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.020 qpair failed and we were unable to recover it. 
00:36:12.020 [2024-12-14 22:45:32.715045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.020 [2024-12-14 22:45:32.715082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.020 qpair failed and we were unable to recover it. 00:36:12.020 [2024-12-14 22:45:32.715279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.020 [2024-12-14 22:45:32.715314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.020 qpair failed and we were unable to recover it. 00:36:12.020 [2024-12-14 22:45:32.715426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.020 [2024-12-14 22:45:32.715461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.020 qpair failed and we were unable to recover it. 00:36:12.020 [2024-12-14 22:45:32.715713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.020 [2024-12-14 22:45:32.715747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.020 qpair failed and we were unable to recover it. 00:36:12.020 [2024-12-14 22:45:32.716056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.020 [2024-12-14 22:45:32.716092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.020 qpair failed and we were unable to recover it. 
00:36:12.020 [2024-12-14 22:45:32.716347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.716381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.716685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.716720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.716990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.717027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.717242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.717277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.717483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.717516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 
00:36:12.021 [2024-12-14 22:45:32.717794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.717829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.718109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.718145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.718365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.718400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.718679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.718713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.718923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.718959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 
00:36:12.021 [2024-12-14 22:45:32.719213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.719248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.719367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.719401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.719619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.719654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.719933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.719969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.720249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.720283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 
00:36:12.021 [2024-12-14 22:45:32.720562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.720603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.720881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.720924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.721186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.721222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.721429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.721463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.721765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.721800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 
00:36:12.021 [2024-12-14 22:45:32.722062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.722098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.722227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.722261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.722465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.722500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.722779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.722814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.723066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.723103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 
00:36:12.021 [2024-12-14 22:45:32.723360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.723394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.723670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.723705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.723985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.724021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.724170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.724204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.724487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.724523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 
00:36:12.021 [2024-12-14 22:45:32.724773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.724807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.725078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.725114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.725394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.725429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.725628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.725662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 00:36:12.021 [2024-12-14 22:45:32.725962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.021 [2024-12-14 22:45:32.725999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.021 qpair failed and we were unable to recover it. 
00:36:12.021 [2024-12-14 22:45:32.726292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.726327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.726594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.726628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.726855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.726890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.727119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.727154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.727301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.727335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 
00:36:12.022 [2024-12-14 22:45:32.727542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.727577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.727762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.727797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.728112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.728149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.728386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.728423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.728559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.728595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 
00:36:12.022 [2024-12-14 22:45:32.728872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.728917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.729175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.729210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.729444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.729479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.729601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.729636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.729770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.729805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 
00:36:12.022 [2024-12-14 22:45:32.730077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.730114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.730371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.730406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.730704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.730738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.731023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.731059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.731337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.731372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 
00:36:12.022 [2024-12-14 22:45:32.731564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.731599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.731878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.731925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.732168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.732202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.732507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.732541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 00:36:12.022 [2024-12-14 22:45:32.732832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.022 [2024-12-14 22:45:32.732865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.022 qpair failed and we were unable to recover it. 
00:36:12.022 [2024-12-14 22:45:32.733176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.022 [2024-12-14 22:45:32.733212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.022 qpair failed and we were unable to recover it.
[The same three-line error (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every subsequent connection attempt from 22:45:32.733 through 22:45:32.764.]
00:36:12.025 [2024-12-14 22:45:32.764563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.025 [2024-12-14 22:45:32.764596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.025 qpair failed and we were unable to recover it. 00:36:12.025 [2024-12-14 22:45:32.764851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.025 [2024-12-14 22:45:32.764887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.025 qpair failed and we were unable to recover it. 00:36:12.025 [2024-12-14 22:45:32.765165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.025 [2024-12-14 22:45:32.765200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.025 qpair failed and we were unable to recover it. 00:36:12.025 [2024-12-14 22:45:32.765398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.025 [2024-12-14 22:45:32.765433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.025 qpair failed and we were unable to recover it. 00:36:12.025 [2024-12-14 22:45:32.765622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.025 [2024-12-14 22:45:32.765657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.025 qpair failed and we were unable to recover it. 
00:36:12.025 [2024-12-14 22:45:32.765928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.025 [2024-12-14 22:45:32.765971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.025 qpair failed and we were unable to recover it. 00:36:12.025 [2024-12-14 22:45:32.766193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.025 [2024-12-14 22:45:32.766230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.025 qpair failed and we were unable to recover it. 00:36:12.025 [2024-12-14 22:45:32.766481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.025 [2024-12-14 22:45:32.766516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.025 qpair failed and we were unable to recover it. 00:36:12.025 [2024-12-14 22:45:32.766776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.025 [2024-12-14 22:45:32.766811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.025 qpair failed and we were unable to recover it. 00:36:12.025 [2024-12-14 22:45:32.766963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.025 [2024-12-14 22:45:32.766999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.025 qpair failed and we were unable to recover it. 
00:36:12.025 [2024-12-14 22:45:32.767298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.025 [2024-12-14 22:45:32.767339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.025 qpair failed and we were unable to recover it. 00:36:12.025 [2024-12-14 22:45:32.767618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.025 [2024-12-14 22:45:32.767655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.025 qpair failed and we were unable to recover it. 00:36:12.025 [2024-12-14 22:45:32.767790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.025 [2024-12-14 22:45:32.767829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.025 qpair failed and we were unable to recover it. 00:36:12.025 [2024-12-14 22:45:32.768084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.025 [2024-12-14 22:45:32.768122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.025 qpair failed and we were unable to recover it. 00:36:12.025 [2024-12-14 22:45:32.768377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.025 [2024-12-14 22:45:32.768414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.025 qpair failed and we were unable to recover it. 
00:36:12.025 [2024-12-14 22:45:32.768673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.025 [2024-12-14 22:45:32.768716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.768864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.768898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.769180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.769223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.769494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.769531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.769816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.769852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 
00:36:12.026 [2024-12-14 22:45:32.770097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.770136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.770439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.770475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.770758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.770794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.771072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.771111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.771388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.771426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 
00:36:12.026 [2024-12-14 22:45:32.771631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.771668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.771956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.771994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.772260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.772296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.772501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.772539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.772817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.772852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 
00:36:12.026 [2024-12-14 22:45:32.773065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.773101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.773326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.773361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.773640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.773675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.773882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.773927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.774215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.774249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 
00:36:12.026 [2024-12-14 22:45:32.774458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.774492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.774775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.774810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.775005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.775041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.775345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.775380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.775576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.775611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 
00:36:12.026 [2024-12-14 22:45:32.775893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.775937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.776229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.776264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.776464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.776499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.776707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.776742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.777038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.777080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 
00:36:12.026 [2024-12-14 22:45:32.777355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.777390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.777669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.777704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.777991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.778027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.778302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.778337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.778803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.778844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 
00:36:12.026 [2024-12-14 22:45:32.779074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.779113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.779316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.026 [2024-12-14 22:45:32.779352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.026 qpair failed and we were unable to recover it. 00:36:12.026 [2024-12-14 22:45:32.779537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.779572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 00:36:12.027 [2024-12-14 22:45:32.779848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.779882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 00:36:12.027 [2024-12-14 22:45:32.780167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.780203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 
00:36:12.027 [2024-12-14 22:45:32.780479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.780514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 00:36:12.027 [2024-12-14 22:45:32.780742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.780778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 00:36:12.027 [2024-12-14 22:45:32.780971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.781007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 00:36:12.027 [2024-12-14 22:45:32.781319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.781355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 00:36:12.027 [2024-12-14 22:45:32.781552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.781587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 
00:36:12.027 [2024-12-14 22:45:32.781732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.781766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 00:36:12.027 [2024-12-14 22:45:32.781961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.781999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 00:36:12.027 [2024-12-14 22:45:32.782136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.782171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 00:36:12.027 [2024-12-14 22:45:32.782321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.782356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 00:36:12.027 [2024-12-14 22:45:32.782631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.782667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 
00:36:12.027 [2024-12-14 22:45:32.782970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.783006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 00:36:12.027 [2024-12-14 22:45:32.783282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.783317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 00:36:12.027 [2024-12-14 22:45:32.783514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.783547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 00:36:12.027 [2024-12-14 22:45:32.783837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.783873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 00:36:12.027 [2024-12-14 22:45:32.784166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.784201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 
00:36:12.027 [2024-12-14 22:45:32.784467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.784501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 00:36:12.027 [2024-12-14 22:45:32.784620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.784654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 00:36:12.027 [2024-12-14 22:45:32.784938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.784976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 00:36:12.027 [2024-12-14 22:45:32.785131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.785166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 00:36:12.027 [2024-12-14 22:45:32.785350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.027 [2024-12-14 22:45:32.785385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.027 qpair failed and we were unable to recover it. 
00:36:12.027 [2024-12-14 22:45:32.785676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.027 [2024-12-14 22:45:32.785711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.027 qpair failed and we were unable to recover it.
00:36:12.027 [2024-12-14 22:45:32.785930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.027 [2024-12-14 22:45:32.785967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.027 qpair failed and we were unable to recover it.
00:36:12.027 [2024-12-14 22:45:32.786260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.027 [2024-12-14 22:45:32.786294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.027 qpair failed and we were unable to recover it.
00:36:12.027 [2024-12-14 22:45:32.786571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.027 [2024-12-14 22:45:32.786605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.027 qpair failed and we were unable to recover it.
00:36:12.027 [2024-12-14 22:45:32.786889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.027 [2024-12-14 22:45:32.786933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.027 qpair failed and we were unable to recover it.
00:36:12.027 [2024-12-14 22:45:32.787211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.027 [2024-12-14 22:45:32.787245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.027 qpair failed and we were unable to recover it.
00:36:12.027 [2024-12-14 22:45:32.787546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.027 [2024-12-14 22:45:32.787581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.027 qpair failed and we were unable to recover it.
00:36:12.027 [2024-12-14 22:45:32.787779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.027 [2024-12-14 22:45:32.787814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.027 qpair failed and we were unable to recover it.
00:36:12.027 [2024-12-14 22:45:32.788069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.027 [2024-12-14 22:45:32.788105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.027 qpair failed and we were unable to recover it.
00:36:12.027 [2024-12-14 22:45:32.788401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.027 [2024-12-14 22:45:32.788436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.027 qpair failed and we were unable to recover it.
00:36:12.027 [2024-12-14 22:45:32.788721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.027 [2024-12-14 22:45:32.788761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.027 qpair failed and we were unable to recover it.
00:36:12.027 [2024-12-14 22:45:32.788963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.027 [2024-12-14 22:45:32.788999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.027 qpair failed and we were unable to recover it.
00:36:12.027 [2024-12-14 22:45:32.789200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.027 [2024-12-14 22:45:32.789234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.027 qpair failed and we were unable to recover it.
00:36:12.027 [2024-12-14 22:45:32.789448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.027 [2024-12-14 22:45:32.789482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.027 qpair failed and we were unable to recover it.
00:36:12.027 [2024-12-14 22:45:32.789755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.027 [2024-12-14 22:45:32.789789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.027 qpair failed and we were unable to recover it.
00:36:12.027 [2024-12-14 22:45:32.790102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.027 [2024-12-14 22:45:32.790137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.027 qpair failed and we were unable to recover it.
00:36:12.027 [2024-12-14 22:45:32.790400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.027 [2024-12-14 22:45:32.790435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.027 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.790655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.790690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.790962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.790998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.791135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.791170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.791366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.791401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.791654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.791688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.791939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.791975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.792278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.792314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.792619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.792653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.792939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.792976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.793271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.793306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.793580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.793616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.793795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.793829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.793973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.794009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.794235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.794271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.794436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.794471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.794748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.794783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.795023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.795059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.795361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.795395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.795682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.795718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.795996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.796032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.796313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.796354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.796626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.796681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.796826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.796861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.797165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.797202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.797483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.797518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.797732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.797766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.798044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.798080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.798267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.798302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.798443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.798478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.798756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.798791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.799109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.799146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.799370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.799404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.799671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.799706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.799838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.799873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.799940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60b5e0 (9): Bad file descriptor
00:36:12.028 [2024-12-14 22:45:32.800413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.800492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.800778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.800816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.801056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.801094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.801396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.801431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.801683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.028 [2024-12-14 22:45:32.801717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.028 qpair failed and we were unable to recover it.
00:36:12.028 [2024-12-14 22:45:32.801925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.801962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.802217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.802252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.802543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.802577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.802871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.802915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.803042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.803076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.803330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.803364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.803644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.803678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.803974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.804010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.804244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.804279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.804406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.804439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.804640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.804675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.804928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.804963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.805186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.805221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.805501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.805536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.805671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.805704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.805912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.805947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.806262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.806296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.806558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.806591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.806874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.806917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.807188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.807223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.807502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.807537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.807822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.807861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.808115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.808151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.808449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.808484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.808784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.808818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.809118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.809154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.809411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.809445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.809696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.809730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.809954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.809989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.810137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.810171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.810366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.810401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.810663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.810696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.810926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.810962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.811236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.811271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.811555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.811589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.811867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.811901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.812125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.812160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.812434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.812467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.812749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.029 [2024-12-14 22:45:32.812784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.029 qpair failed and we were unable to recover it.
00:36:12.029 [2024-12-14 22:45:32.813087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.813125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.813318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.813353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.813604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.813638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.813769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.813803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.814101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.814137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.814417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.814451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.814729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.814764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.815055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.815090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.815387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.815421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.815686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.815722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.816012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.816048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.816320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.816355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.816621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.816656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.816938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.816973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.817195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.817231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.817448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.817483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.817666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.817701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.817885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.817931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.818205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.818239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.818505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.030 [2024-12-14 22:45:32.818539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.030 qpair failed and we were unable to recover it.
00:36:12.030 [2024-12-14 22:45:32.818764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.030 [2024-12-14 22:45:32.818798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.030 qpair failed and we were unable to recover it. 00:36:12.030 [2024-12-14 22:45:32.819057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.030 [2024-12-14 22:45:32.819094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.030 qpair failed and we were unable to recover it. 00:36:12.030 [2024-12-14 22:45:32.819276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.030 [2024-12-14 22:45:32.819316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.030 qpair failed and we were unable to recover it. 00:36:12.030 [2024-12-14 22:45:32.819579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.030 [2024-12-14 22:45:32.819614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.030 qpair failed and we were unable to recover it. 00:36:12.030 [2024-12-14 22:45:32.819823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.030 [2024-12-14 22:45:32.819857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.030 qpair failed and we were unable to recover it. 
00:36:12.030 [2024-12-14 22:45:32.820051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.030 [2024-12-14 22:45:32.820087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.030 qpair failed and we were unable to recover it. 00:36:12.030 [2024-12-14 22:45:32.820282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.030 [2024-12-14 22:45:32.820317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.030 qpair failed and we were unable to recover it. 00:36:12.030 [2024-12-14 22:45:32.820500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.030 [2024-12-14 22:45:32.820534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.030 qpair failed and we were unable to recover it. 00:36:12.030 [2024-12-14 22:45:32.820757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.030 [2024-12-14 22:45:32.820791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.030 qpair failed and we were unable to recover it. 00:36:12.030 [2024-12-14 22:45:32.821100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.030 [2024-12-14 22:45:32.821136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.030 qpair failed and we were unable to recover it. 
00:36:12.030 [2024-12-14 22:45:32.821414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.030 [2024-12-14 22:45:32.821448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.030 qpair failed and we were unable to recover it. 00:36:12.030 [2024-12-14 22:45:32.821636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.030 [2024-12-14 22:45:32.821671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.030 qpair failed and we were unable to recover it. 00:36:12.030 [2024-12-14 22:45:32.821941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.030 [2024-12-14 22:45:32.821976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.030 qpair failed and we were unable to recover it. 00:36:12.030 [2024-12-14 22:45:32.822181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.030 [2024-12-14 22:45:32.822215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.030 qpair failed and we were unable to recover it. 00:36:12.030 [2024-12-14 22:45:32.822469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.030 [2024-12-14 22:45:32.822503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 
00:36:12.031 [2024-12-14 22:45:32.822627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.822662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.822867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.822912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.823120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.823154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.823406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.823441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.823745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.823779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 
00:36:12.031 [2024-12-14 22:45:32.824053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.824090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.824345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.824379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.824653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.824688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.824875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.824917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.825222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.825256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 
00:36:12.031 [2024-12-14 22:45:32.825510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.825545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.825809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.825843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.826136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.826171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.826456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.826489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.826771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.826807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 
00:36:12.031 [2024-12-14 22:45:32.827115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.827152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.827350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.827384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.827583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.827618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.827889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.827943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.828212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.828246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 
00:36:12.031 [2024-12-14 22:45:32.828430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.828464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.828744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.828778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.828988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.829024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.829251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.829285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.829512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.829547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 
00:36:12.031 [2024-12-14 22:45:32.829691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.829726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.829869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.829911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.830190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.830230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.830531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.830565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.830846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.830879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 
00:36:12.031 [2024-12-14 22:45:32.831091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.831127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.831276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.831310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.831564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.831597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.831714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.831748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.832048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.832084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 
00:36:12.031 [2024-12-14 22:45:32.832386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.832421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.832678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.832712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.833003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.833039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.031 qpair failed and we were unable to recover it. 00:36:12.031 [2024-12-14 22:45:32.833309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.031 [2024-12-14 22:45:32.833344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.833487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.833521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 
00:36:12.032 [2024-12-14 22:45:32.833734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.833769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.833978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.834015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.834269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.834303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.834603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.834637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.834933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.834969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 
00:36:12.032 [2024-12-14 22:45:32.835254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.835288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.835583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.835617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.835851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.835886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.836049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.836084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.836235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.836269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 
00:36:12.032 [2024-12-14 22:45:32.836483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.836516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.836782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.836816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.837020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.837057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.837257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.837291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.837493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.837529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 
00:36:12.032 [2024-12-14 22:45:32.837821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.837855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.838092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.838126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.838276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.838310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.838562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.838597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.838834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.838869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 
00:36:12.032 [2024-12-14 22:45:32.839077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.839113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.839370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.839405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.839613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.839647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.839893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.839936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 00:36:12.032 [2024-12-14 22:45:32.840152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.032 [2024-12-14 22:45:32.840186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.032 qpair failed and we were unable to recover it. 
[... identical "posix_sock_create: *ERROR*: connect() failed, errno = 111" / "nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420" pairs repeat from 22:45:32.840408 through 22:45:32.869845, each followed by "qpair failed and we were unable to recover it." ...]
00:36:12.313 [2024-12-14 22:45:32.869988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.313 [2024-12-14 22:45:32.870024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.313 qpair failed and we were unable to recover it. 00:36:12.313 [2024-12-14 22:45:32.870308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.313 [2024-12-14 22:45:32.870343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.313 qpair failed and we were unable to recover it. 00:36:12.313 [2024-12-14 22:45:32.870612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.870646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.870863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.870897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.871116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.871151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 
00:36:12.314 [2024-12-14 22:45:32.871430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.871465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.871736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.871771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.872007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.872045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.872192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.872227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.872546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.872580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 
00:36:12.314 [2024-12-14 22:45:32.872854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.872889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.873095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.873131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.873432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.873472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.873726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.873760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.874047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.874083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 
00:36:12.314 [2024-12-14 22:45:32.874358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.874392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.874519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.874554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.874847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.874880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.875086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.875122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.875356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.875391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 
00:36:12.314 [2024-12-14 22:45:32.875695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.875729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.875987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.876023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.876304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.876339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.876548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.876582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.876835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.876870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 
00:36:12.314 [2024-12-14 22:45:32.877179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.877215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.877451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.877486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.877632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.877667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.877946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.877984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.878285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.878320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 
00:36:12.314 [2024-12-14 22:45:32.878551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.878585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.878838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.878873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.879183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.879218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.879513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.879547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.879757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.879792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 
00:36:12.314 [2024-12-14 22:45:32.880073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.880110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.880316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.880351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.880536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.880570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.880772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.880807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 00:36:12.314 [2024-12-14 22:45:32.881026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.881062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.314 qpair failed and we were unable to recover it. 
00:36:12.314 [2024-12-14 22:45:32.881262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.314 [2024-12-14 22:45:32.881297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.881506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.881541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.881730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.881764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.882029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.882066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.882210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.882244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 
00:36:12.315 [2024-12-14 22:45:32.882430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.882464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.882717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.882752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.882980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.883015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.883203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.883237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.883440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.883474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 
00:36:12.315 [2024-12-14 22:45:32.883656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.883690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.883965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.884001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.884273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.884314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.884502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.884536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.884823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.884857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 
00:36:12.315 [2024-12-14 22:45:32.885090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.885126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.885315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.885349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.885632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.885667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.885853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.885888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.886188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.886224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 
00:36:12.315 [2024-12-14 22:45:32.886457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.886492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.886762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.886797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.887074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.887110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.887379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.887415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.887649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.887683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 
00:36:12.315 [2024-12-14 22:45:32.887965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.888002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.888220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.888254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.888368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.888404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.888657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.888691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.888922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.888958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 
00:36:12.315 [2024-12-14 22:45:32.889238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.889273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.889497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.889531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.889783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.889818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.890000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.890037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 00:36:12.315 [2024-12-14 22:45:32.890225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.315 [2024-12-14 22:45:32.890259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.315 qpair failed and we were unable to recover it. 
00:36:12.315 [2024-12-14 22:45:32.890462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.315 [2024-12-14 22:45:32.890497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.315 qpair failed and we were unable to recover it.
[the same three-line error sequence repeats for every reconnect attempt from 22:45:32.890462 through 22:45:32.921358: each connect() to tqpair=0x7f1a94000b90 at 10.0.0.2, port 4420 fails with errno = 111, and each qpair fails without recovery]
00:36:12.318 [2024-12-14 22:45:32.921631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.318 [2024-12-14 22:45:32.921665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.318 qpair failed and we were unable to recover it. 00:36:12.318 [2024-12-14 22:45:32.921794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.921828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.921974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.922011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.922315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.922349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.922600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.922634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 
00:36:12.319 [2024-12-14 22:45:32.922889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.922937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.923220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.923254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.923538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.923572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.923846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.923880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.924171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.924213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 
00:36:12.319 [2024-12-14 22:45:32.924475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.924511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.924711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.924745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.925012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.925048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.925325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.925359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.925559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.925593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 
00:36:12.319 [2024-12-14 22:45:32.925867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.925901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.926127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.926162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.926462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.926496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.926679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.926712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.926964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.927000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 
00:36:12.319 [2024-12-14 22:45:32.927305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.927339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.927526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.927562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.927778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.927812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.928021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.928057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.928282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.928318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 
00:36:12.319 [2024-12-14 22:45:32.928516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.928550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.928846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.928880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.929166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.929200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.929478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.929513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.929795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.929830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 
00:36:12.319 [2024-12-14 22:45:32.930130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.930167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.930416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.930450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.930749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.930784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.931008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.931045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.931301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.931335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 
00:36:12.319 [2024-12-14 22:45:32.931528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.931563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.931842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.931877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.932160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.932195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.932403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.932437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 00:36:12.319 [2024-12-14 22:45:32.932723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.319 [2024-12-14 22:45:32.932757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.319 qpair failed and we were unable to recover it. 
00:36:12.319 [2024-12-14 22:45:32.933031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.933068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.933354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.933389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.933579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.933612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.933818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.933853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.934054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.934089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 
00:36:12.320 [2024-12-14 22:45:32.934310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.934344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.934543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.934577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.934803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.934837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.935021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.935056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.935240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.935281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 
00:36:12.320 [2024-12-14 22:45:32.935485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.935519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.935727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.935761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.936035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.936071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.936400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.936434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.936652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.936686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 
00:36:12.320 [2024-12-14 22:45:32.936954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.936990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.937184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.937218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.937471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.937505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.937722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.937757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.938022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.938059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 
00:36:12.320 [2024-12-14 22:45:32.938205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.938240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.938458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.938491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.938677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.938711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.938975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.939013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.939217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.939252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 
00:36:12.320 [2024-12-14 22:45:32.939459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.939493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.939794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.939827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.940015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.940052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.940264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.940299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.940571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.940605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 
00:36:12.320 [2024-12-14 22:45:32.940882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.940925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.941125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.941159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.941412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.941447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.941643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.941676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 00:36:12.320 [2024-12-14 22:45:32.941947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.941983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 
00:36:12.320 [2024-12-14 22:45:32.942268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.320 [2024-12-14 22:45:32.942303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.320 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (posix.c:1054:posix_sock_create connect() failed errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats over a hundred more times between 22:45:32.942 and 22:45:32.973 ...]
00:36:12.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 542305 Killed "${NVMF_APP[@]}" "$@"
00:36:12.324 22:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:36:12.324 22:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:12.324 22:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:12.324 22:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:12.324 22:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:12.324 22:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=542998
00:36:12.324 22:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 542998
00:36:12.325 22:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:12.325 22:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 542998 ']'
00:36:12.325 22:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:12.325 22:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:12.325 22:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:12.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:12.325 22:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:12.325 22:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:12.326 [2024-12-14 22:45:32.998142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:32.998179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:32.998428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:32.998462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:32.998706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:32.998740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:32.998880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:32.998925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:32.999293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:32.999373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 
00:36:12.326 [2024-12-14 22:45:32.999594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:32.999634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:32.999948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:32.999987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:33.000280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:33.000318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:33.000556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:33.000592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:33.000792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:33.000828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 
00:36:12.326 [2024-12-14 22:45:33.001114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:33.001151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:33.001377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:33.001412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:33.001550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:33.001584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:33.001839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:33.001876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:33.002159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:33.002196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 
00:36:12.326 [2024-12-14 22:45:33.002417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:33.002454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:33.002774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:33.002809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:33.003087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:33.003125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:33.003432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:33.003467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:33.003713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:33.003750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 
00:36:12.326 [2024-12-14 22:45:33.003936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:33.003973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:33.004108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:33.004143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:33.004422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:33.004459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:33.004738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:33.004774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 00:36:12.326 [2024-12-14 22:45:33.005035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.326 [2024-12-14 22:45:33.005074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.326 qpair failed and we were unable to recover it. 
00:36:12.327 [2024-12-14 22:45:33.005216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.005252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.005440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.005475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.005736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.005772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.005979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.006016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.006238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.006273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 
00:36:12.327 [2024-12-14 22:45:33.006539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.006575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.006861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.006919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.007066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.007103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.007317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.007351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.007606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.007643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 
00:36:12.327 [2024-12-14 22:45:33.007842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.007877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.008171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.008207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.008356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.008391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.008690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.008724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.008938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.008976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 
00:36:12.327 [2024-12-14 22:45:33.009191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.009225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.009546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.009580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.009835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.009870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.010094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.010135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.010341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.010375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 
00:36:12.327 [2024-12-14 22:45:33.010639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.010675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.010962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.010997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.011211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.011246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.011385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.011419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.011718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.011751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 
00:36:12.327 [2024-12-14 22:45:33.011945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.011981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.012166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.012200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.012457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.012491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.012827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.012935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.013190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.013229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 
00:36:12.327 [2024-12-14 22:45:33.013515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.013550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.013783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.013818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.014042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.014077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.014227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.014272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.327 qpair failed and we were unable to recover it. 00:36:12.327 [2024-12-14 22:45:33.014397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.327 [2024-12-14 22:45:33.014430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 
00:36:12.328 [2024-12-14 22:45:33.014558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.014591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.014780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.014815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.015014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.015050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.015165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.015198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.015476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.015511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 
00:36:12.328 [2024-12-14 22:45:33.015719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.015754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.015944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.015981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.016174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.016207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.016342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.016375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.016504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.016538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 
00:36:12.328 [2024-12-14 22:45:33.016728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.016762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.017045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.017082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.017234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.017267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.017491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.017525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.017645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.017679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 
00:36:12.328 [2024-12-14 22:45:33.017889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.017935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.018134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.018169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.018374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.018407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.018539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.018573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.018779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.018813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 
00:36:12.328 [2024-12-14 22:45:33.019016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.019051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.019193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.019227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.019539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.019574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.019788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.019822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.019961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.019997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 
00:36:12.328 [2024-12-14 22:45:33.020285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.020319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.020522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.020555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.020809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.020844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.020980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.021016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.021218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.021251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 
00:36:12.328 [2024-12-14 22:45:33.021370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.021403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.021590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.021625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.021913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.021949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.022061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.022095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.022283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.022317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 
00:36:12.328 [2024-12-14 22:45:33.022594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.022627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.022844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.022879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.023094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.023135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.328 [2024-12-14 22:45:33.023330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.328 [2024-12-14 22:45:33.023370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.328 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.023573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.023608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 
00:36:12.329 [2024-12-14 22:45:33.023859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.023893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.024086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.024120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.024305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.024338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.024587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.024622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.024880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.024924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 
00:36:12.329 [2024-12-14 22:45:33.025195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.025228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.025360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.025394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.025647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.025681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.025859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.025894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.026019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.026052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 
00:36:12.329 [2024-12-14 22:45:33.026258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.026292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.026506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.026538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.026740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.026772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.026971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.027007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.027198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.027231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 
00:36:12.329 [2024-12-14 22:45:33.027444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.027479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.027676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.027709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.027890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.027935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.028133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.028168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.028399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.028433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 
00:36:12.329 [2024-12-14 22:45:33.028621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.028654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.028856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.028889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.029086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.029120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.029312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.029346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.029539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.029572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 
00:36:12.329 [2024-12-14 22:45:33.029808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.029843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.030038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.030073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.030293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.030327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.030632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.030665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.030864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.030898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 
00:36:12.329 [2024-12-14 22:45:33.031165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.031200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.031452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.031486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.031769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.031801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.032095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.032131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.032391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.032425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 
00:36:12.329 [2024-12-14 22:45:33.032563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.032597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.032847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.032880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.329 [2024-12-14 22:45:33.033013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.329 [2024-12-14 22:45:33.033045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.329 qpair failed and we were unable to recover it. 00:36:12.330 [2024-12-14 22:45:33.033247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.033285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 00:36:12.330 [2024-12-14 22:45:33.033513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.033546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 
00:36:12.330 [2024-12-14 22:45:33.033672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.330 [2024-12-14 22:45:33.033704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.330 qpair failed and we were unable to recover it.
00:36:12.330 [2024-12-14 22:45:33.033901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.330 [2024-12-14 22:45:33.033944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.330 qpair failed and we were unable to recover it.
00:36:12.330 [2024-12-14 22:45:33.034161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.330 [2024-12-14 22:45:33.034196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.330 qpair failed and we were unable to recover it.
00:36:12.330 [2024-12-14 22:45:33.034470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.330 [2024-12-14 22:45:33.034504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.330 qpair failed and we were unable to recover it.
00:36:12.330 [2024-12-14 22:45:33.034661] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:36:12.330 [2024-12-14 22:45:33.034698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.330 [2024-12-14 22:45:33.034708] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:12.330 [2024-12-14 22:45:33.034730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.330 qpair failed and we were unable to recover it.
00:36:12.330 [2024-12-14 22:45:33.034850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.330 [2024-12-14 22:45:33.034882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.330 qpair failed and we were unable to recover it.
00:36:12.330 [2024-12-14 22:45:33.035118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.330 [2024-12-14 22:45:33.035148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.330 qpair failed and we were unable to recover it.
00:36:12.330 [2024-12-14 22:45:33.035428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.330 [2024-12-14 22:45:33.035460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.330 qpair failed and we were unable to recover it.
00:36:12.330 [2024-12-14 22:45:33.035639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.035670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 00:36:12.330 [2024-12-14 22:45:33.035884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.035928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 00:36:12.330 [2024-12-14 22:45:33.036132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.036173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 00:36:12.330 [2024-12-14 22:45:33.036374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.036408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 00:36:12.330 [2024-12-14 22:45:33.036600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.036632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 
00:36:12.330 [2024-12-14 22:45:33.036822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.036855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 00:36:12.330 [2024-12-14 22:45:33.037079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.037115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 00:36:12.330 [2024-12-14 22:45:33.037310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.037344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 00:36:12.330 [2024-12-14 22:45:33.037498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.037533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 00:36:12.330 [2024-12-14 22:45:33.037725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.037758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 
00:36:12.330 [2024-12-14 22:45:33.037961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.037997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 00:36:12.330 [2024-12-14 22:45:33.038192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.038225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 00:36:12.330 [2024-12-14 22:45:33.038418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.038453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 00:36:12.330 [2024-12-14 22:45:33.038580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.038614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 00:36:12.330 [2024-12-14 22:45:33.038729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.038760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 
00:36:12.330 [2024-12-14 22:45:33.038944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.038979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 00:36:12.330 [2024-12-14 22:45:33.039209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.039243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 00:36:12.330 [2024-12-14 22:45:33.039433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.039467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 00:36:12.330 [2024-12-14 22:45:33.039723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.039757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 00:36:12.330 [2024-12-14 22:45:33.040025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.330 [2024-12-14 22:45:33.040061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.330 qpair failed and we were unable to recover it. 
00:36:12.330 [2024-12-14 22:45:33.040273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.330 [2024-12-14 22:45:33.040306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.330 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it") repeats continuously from 22:45:33.040 through 22:45:33.066 for tqpair=0x7f1aa0000b90, 0x7f1a98000b90, 0x5fd6a0, and 0x7f1a94000b90, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:36:12.333 [2024-12-14 22:45:33.066998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.333 [2024-12-14 22:45:33.067034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.333 qpair failed and we were unable to recover it. 00:36:12.333 [2024-12-14 22:45:33.067303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.333 [2024-12-14 22:45:33.067337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.333 qpair failed and we were unable to recover it. 00:36:12.333 [2024-12-14 22:45:33.067562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.333 [2024-12-14 22:45:33.067604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.333 qpair failed and we were unable to recover it. 00:36:12.333 [2024-12-14 22:45:33.067789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.333 [2024-12-14 22:45:33.067824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.333 qpair failed and we were unable to recover it. 00:36:12.333 [2024-12-14 22:45:33.068071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.333 [2024-12-14 22:45:33.068107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.333 qpair failed and we were unable to recover it. 
00:36:12.333 [2024-12-14 22:45:33.068302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.333 [2024-12-14 22:45:33.068336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.333 qpair failed and we were unable to recover it. 00:36:12.333 [2024-12-14 22:45:33.068610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.333 [2024-12-14 22:45:33.068642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.068778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.068812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.069003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.069038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.069309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.069343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 
00:36:12.334 [2024-12-14 22:45:33.069621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.069654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.069790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.069823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.070001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.070037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.070325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.070359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.070553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.070587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 
00:36:12.334 [2024-12-14 22:45:33.070766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.070806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.071033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.071070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.071356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.071390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.071654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.071687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.071879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.071923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 
00:36:12.334 [2024-12-14 22:45:33.072111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.072145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.072279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.072312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.072487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.072520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.072780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.072813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.072994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.073027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 
00:36:12.334 [2024-12-14 22:45:33.073173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.073207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.073396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.073430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.073560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.073593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.073742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.073774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.074084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.074119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 
00:36:12.334 [2024-12-14 22:45:33.074294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.074327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.074515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.074549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.074679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.074712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.074922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.074956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.075151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.075183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 
00:36:12.334 [2024-12-14 22:45:33.075359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.075391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.075566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.075600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.075847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.075880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.076166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.076200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.076470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.076502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 
00:36:12.334 [2024-12-14 22:45:33.076689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.076722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.076947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.076992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.334 [2024-12-14 22:45:33.077123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.334 [2024-12-14 22:45:33.077170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.334 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.077449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.077486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.077760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.077794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 
00:36:12.335 [2024-12-14 22:45:33.077990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.078025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.078266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.078300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.078409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.078443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.078635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.078670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.078885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.078931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 
00:36:12.335 [2024-12-14 22:45:33.079059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.079092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.079288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.079321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.079539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.079572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.079693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.079727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.079927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.079964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 
00:36:12.335 [2024-12-14 22:45:33.080086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.080128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.080238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.080273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.080385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.080418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.080600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.080632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.080807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.080840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 
00:36:12.335 [2024-12-14 22:45:33.080966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.081001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.081119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.081151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.081421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.081454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.081652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.081686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.081861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.081894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 
00:36:12.335 [2024-12-14 22:45:33.082029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.082063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.082306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.082339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.082469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.082503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.082645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.082680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.082816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.082850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 
00:36:12.335 [2024-12-14 22:45:33.083059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.083094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.083287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.083330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.083441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.083476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.083727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.083759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 00:36:12.335 [2024-12-14 22:45:33.083868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.335 [2024-12-14 22:45:33.083922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.335 qpair failed and we were unable to recover it. 
00:36:12.335 [2024-12-14 22:45:33.084129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.335 [2024-12-14 22:45:33.084163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:12.335 qpair failed and we were unable to recover it.
00:36:12.335 [2024-12-14 22:45:33.084298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.335 [2024-12-14 22:45:33.084334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:12.335 qpair failed and we were unable to recover it.
00:36:12.335 [2024-12-14 22:45:33.084508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.335 [2024-12-14 22:45:33.084540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:12.335 qpair failed and we were unable to recover it.
00:36:12.335 [2024-12-14 22:45:33.084810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.335 [2024-12-14 22:45:33.084843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:12.335 qpair failed and we were unable to recover it.
00:36:12.335 [2024-12-14 22:45:33.085140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.335 [2024-12-14 22:45:33.085176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:12.335 qpair failed and we were unable to recover it.
00:36:12.335 [2024-12-14 22:45:33.085357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.335 [2024-12-14 22:45:33.085390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:12.335 qpair failed and we were unable to recover it.
00:36:12.335 [2024-12-14 22:45:33.085684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.335 [2024-12-14 22:45:33.085718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:12.335 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.085929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.085973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.086243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.086277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.086421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.086455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.086631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.086664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.086857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.086891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.087082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.087116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.087292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.087325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.087500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.087533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.087731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.087764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.088004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.088040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.088144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.088178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.088359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.088393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.088614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.088648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.088819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.088853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.088998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.089033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.089209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.089242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.089419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.089453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.089661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.089697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.089878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.089923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.090120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.090153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.090341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.090375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.090502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.090535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.090727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.090761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.090949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.090984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.091293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.091325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.091498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.091532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.091639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.091672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.091815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.091855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.092153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.092189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.092368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.092401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.092529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.092562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.092771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.092804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.093070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.093104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.093234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.336 [2024-12-14 22:45:33.093268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.336 qpair failed and we were unable to recover it.
00:36:12.336 [2024-12-14 22:45:33.093462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.093495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.093703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.093736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.093920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.093955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.094223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.094273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.094384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.094417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.094520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.094553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.094776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.094810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.095061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.095095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.095299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.095332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.095598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.095631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.095812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.095845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.095960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.095992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.096186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.096220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.096411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.096444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.096576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.096610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.096826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.096859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.097111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.097146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.097327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.097361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.097479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.097512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.097772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.097807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.098017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.098053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.098191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.098225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.098404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.098436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.098579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.098585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:12.337 [2024-12-14 22:45:33.098612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.098857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.098891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.099092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.099126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.099343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.099378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.099501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.099534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.099727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.099761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.099880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.099921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.100111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.100144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.100338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.100370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.100477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.100511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.100684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.100717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.100899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.100942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.101146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.101179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.101386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.101420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.101594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.101626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.101800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.101835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.337 qpair failed and we were unable to recover it.
00:36:12.337 [2024-12-14 22:45:33.102025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.337 [2024-12-14 22:45:33.102061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.102276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.102309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.102430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.102463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.102657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.102691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.102884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.102926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.103174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.103208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.103394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.103427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.103548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.103582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.103824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.103869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.104080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.104121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.104320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.104354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.104606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.104640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.104899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.104941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.105063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.105096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.105340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.105375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.105616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.105651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.105836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.105870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.106098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.106140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.106270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.106307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.106601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.106635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.106847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.106882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.107183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.107217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.107420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.107454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.107586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.107620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.107813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.107848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.108071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.108105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.108390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.108426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.108619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.108653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.108867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.108908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.109105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.109140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.109259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.109294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.109542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.109577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.109838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.109870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.110070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.110113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.110237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.338 [2024-12-14 22:45:33.110271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:12.338 qpair failed and we were unable to recover it.
00:36:12.338 [2024-12-14 22:45:33.110517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.338 [2024-12-14 22:45:33.110568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.338 qpair failed and we were unable to recover it. 00:36:12.338 [2024-12-14 22:45:33.110703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.338 [2024-12-14 22:45:33.110737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.338 qpair failed and we were unable to recover it. 00:36:12.338 [2024-12-14 22:45:33.110954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.338 [2024-12-14 22:45:33.110988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.338 qpair failed and we were unable to recover it. 00:36:12.338 [2024-12-14 22:45:33.111168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.338 [2024-12-14 22:45:33.111202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.338 qpair failed and we were unable to recover it. 00:36:12.338 [2024-12-14 22:45:33.111390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.338 [2024-12-14 22:45:33.111431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.338 qpair failed and we were unable to recover it. 
00:36:12.339 [2024-12-14 22:45:33.111540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.111573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.111821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.111858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.112113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.112148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.112277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.112310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.112447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.112481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 
00:36:12.339 [2024-12-14 22:45:33.112669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.112704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.112931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.112966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.113207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.113240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.113366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.113399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.113682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.113716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 
00:36:12.339 [2024-12-14 22:45:33.113922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.113957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.114130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.114164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.114444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.114478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.114679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.114713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.114846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.114879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 
00:36:12.339 [2024-12-14 22:45:33.115005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.115039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.115155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.115189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.115299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.115331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.115523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.115556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.115815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.115849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 
00:36:12.339 [2024-12-14 22:45:33.116036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.116071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.116189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.116222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.116454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.116500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.116614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.116648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.116780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.116815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 
00:36:12.339 [2024-12-14 22:45:33.117013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.117048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.117231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.117264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.117398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.117432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.117606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.117648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.117821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.117854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 
00:36:12.339 [2024-12-14 22:45:33.117994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.118027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.118217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.118250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.118370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.118404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.118581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.118614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.118824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.118858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 
00:36:12.339 [2024-12-14 22:45:33.119074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.119109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.119256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.119291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.119480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.119515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.119786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.339 [2024-12-14 22:45:33.119823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.339 qpair failed and we were unable to recover it. 00:36:12.339 [2024-12-14 22:45:33.120018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.120053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 
00:36:12.340 [2024-12-14 22:45:33.120318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.120352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.120568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.120606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.120733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.120771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.120964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.121002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.121126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.121169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 
00:36:12.340 [2024-12-14 22:45:33.121357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.121402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.121594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.121630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.121773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.121808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.121978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.122013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.122193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:12.340 [2024-12-14 22:45:33.122227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:12.340 [2024-12-14 22:45:33.122235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:12.340 [2024-12-14 22:45:33.122231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.122243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:12.340 [2024-12-14 22:45:33.122250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:12.340 [2024-12-14 22:45:33.122263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.122460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.122493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.122681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.122713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.122844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.122877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 
00:36:12.340 [2024-12-14 22:45:33.123066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.123100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.123216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.123250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.123378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.123412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.123531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.123563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.123766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.123800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 [2024-12-14 22:45:33.123716] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:36:12.340 qpair failed and we were unable to recover it. 
00:36:12.340 [2024-12-14 22:45:33.123826] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:36:12.340 [2024-12-14 22:45:33.123949] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:36:12.340 [2024-12-14 22:45:33.123951] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:36:12.340 [2024-12-14 22:45:33.124078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.124112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.124350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.124385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.124653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.124688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.124920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.124957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 
00:36:12.340 [2024-12-14 22:45:33.125216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.125252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.125466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.125499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.125629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.125662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.125796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.125831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 00:36:12.340 [2024-12-14 22:45:33.126023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.340 [2024-12-14 22:45:33.126059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.340 qpair failed and we were unable to recover it. 
00:36:12.340 [2024-12-14 22:45:33.126299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.340 [2024-12-14 22:45:33.126333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.340 qpair failed and we were unable to recover it.
[log trimmed: the same connect()-failed / qpair-failed pair repeats continuously from 22:45:33.126519 through 22:45:33.151937, against 10.0.0.2 port 4420, cycling through tqpair addresses 0x7f1a94000b90, 0x7f1aa0000b90, 0x7f1a98000b90, and 0x5fd6a0]
00:36:12.343 [2024-12-14 22:45:33.152129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.343 [2024-12-14 22:45:33.152164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.343 qpair failed and we were unable to recover it. 00:36:12.343 [2024-12-14 22:45:33.152269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.343 [2024-12-14 22:45:33.152302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.343 qpair failed and we were unable to recover it. 00:36:12.343 [2024-12-14 22:45:33.152494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.343 [2024-12-14 22:45:33.152527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.343 qpair failed and we were unable to recover it. 00:36:12.343 [2024-12-14 22:45:33.152742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.343 [2024-12-14 22:45:33.152776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.343 qpair failed and we were unable to recover it. 00:36:12.343 [2024-12-14 22:45:33.152898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.343 [2024-12-14 22:45:33.152944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.343 qpair failed and we were unable to recover it. 
00:36:12.343 [2024-12-14 22:45:33.153087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.343 [2024-12-14 22:45:33.153120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.153290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.153332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.153522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.153556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.153797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.153831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.154080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.154116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 
00:36:12.344 [2024-12-14 22:45:33.154258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.154312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.154421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.154456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.154649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.154683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.154856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.154888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.155027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.155062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 
00:36:12.344 [2024-12-14 22:45:33.155190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.155225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.155354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.155388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.155569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.155604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.155733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.155768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.155957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.155992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 
00:36:12.344 [2024-12-14 22:45:33.156118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.156151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.156330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.156366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.156547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.156586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.156779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.156812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.156992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.157027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 
00:36:12.344 [2024-12-14 22:45:33.157272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.157311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.157523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.157559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.157690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.157726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.157901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.157951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.158190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.158224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 
00:36:12.344 [2024-12-14 22:45:33.158425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.158460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.158713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.158751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.158927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.158963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.159148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.159183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.159371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.159406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 
00:36:12.344 [2024-12-14 22:45:33.159615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.159651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.159831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.159866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.160033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.160097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.160239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.160273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.160520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.160553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 
00:36:12.344 [2024-12-14 22:45:33.160732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.160765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.160915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.160950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.344 qpair failed and we were unable to recover it. 00:36:12.344 [2024-12-14 22:45:33.161182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.344 [2024-12-14 22:45:33.161222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.161419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.161455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.161628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.161660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 
00:36:12.345 [2024-12-14 22:45:33.161846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.161879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.162065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.162098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.162217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.162250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.162511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.162544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.162737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.162770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 
00:36:12.345 [2024-12-14 22:45:33.162875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.162920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.163120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.163154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.163353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.163386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.163497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.163529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.163740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.163777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 
00:36:12.345 [2024-12-14 22:45:33.164018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.164053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.164247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.164281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.164463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.164497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.164668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.164701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.164891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.164937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 
00:36:12.345 [2024-12-14 22:45:33.165180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.165215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.165502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.165535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.165712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.165748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.165876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.165919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.166166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.166205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 
00:36:12.345 [2024-12-14 22:45:33.166395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.166429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.166692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.166726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.166857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.166891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.167037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.167073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.167212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.167244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 
00:36:12.345 [2024-12-14 22:45:33.167430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.167463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.167725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.167760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.167964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.345 [2024-12-14 22:45:33.168001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.345 qpair failed and we were unable to recover it. 00:36:12.345 [2024-12-14 22:45:33.168178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.346 [2024-12-14 22:45:33.168212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.346 qpair failed and we were unable to recover it. 00:36:12.346 [2024-12-14 22:45:33.168344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.346 [2024-12-14 22:45:33.168382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.346 qpair failed and we were unable to recover it. 
00:36:12.346 [2024-12-14 22:45:33.168500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.346 [2024-12-14 22:45:33.168535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.346 qpair failed and we were unable to recover it.
00:36:12.346 [2024-12-14 22:45:33.175076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.346 [2024-12-14 22:45:33.175136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.346 qpair failed and we were unable to recover it.
00:36:12.619 [2024-12-14 22:45:33.190747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.619 [2024-12-14 22:45:33.190794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420
00:36:12.619 qpair failed and we were unable to recover it.
(The connect()/qpair error triplet above repeats continuously from 22:45:33.168500 through 22:45:33.194974 for tqpair=0x5fd6a0, 0x7f1aa0000b90, and 0x7f1a98000b90, always with errno = 111 against addr=10.0.0.2, port=4420; only the per-attempt timestamps differ between occurrences.)
00:36:12.619 [2024-12-14 22:45:33.195112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.195144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 00:36:12.619 [2024-12-14 22:45:33.195381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.195414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 00:36:12.619 [2024-12-14 22:45:33.195600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.195633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 00:36:12.619 [2024-12-14 22:45:33.195793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.195825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 00:36:12.619 [2024-12-14 22:45:33.196084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.196118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 
00:36:12.619 [2024-12-14 22:45:33.196248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.196280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 00:36:12.619 [2024-12-14 22:45:33.196401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.196433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 00:36:12.619 [2024-12-14 22:45:33.196678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.196711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 00:36:12.619 [2024-12-14 22:45:33.196925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.196960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 00:36:12.619 [2024-12-14 22:45:33.197225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.197259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 
00:36:12.619 [2024-12-14 22:45:33.197440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.197473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 00:36:12.619 [2024-12-14 22:45:33.197718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.197752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 00:36:12.619 [2024-12-14 22:45:33.197963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.197996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 00:36:12.619 [2024-12-14 22:45:33.198265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.198298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 00:36:12.619 [2024-12-14 22:45:33.198594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.198627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 
00:36:12.619 [2024-12-14 22:45:33.198887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.198930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 00:36:12.619 [2024-12-14 22:45:33.199206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.199239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 00:36:12.619 [2024-12-14 22:45:33.199513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.199545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 00:36:12.619 [2024-12-14 22:45:33.199733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.619 [2024-12-14 22:45:33.199767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.619 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.200025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.200060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 
00:36:12.620 [2024-12-14 22:45:33.200234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.200267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.200506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.200539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.200806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.200853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.201091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.201127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.201248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.201281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 
00:36:12.620 [2024-12-14 22:45:33.201535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.201568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.201857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.201890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.202110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.202144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.202348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.202381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.202618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.202651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 
00:36:12.620 [2024-12-14 22:45:33.202778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.202811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.203073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.203108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.203286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.203319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.203509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.203542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.203807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.203840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 
00:36:12.620 [2024-12-14 22:45:33.203954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.203996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.204173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.204206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.204479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.204512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.204740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.204773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.204967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.205001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 
00:36:12.620 [2024-12-14 22:45:33.205190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.205223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.205461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.205494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.205758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.205791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.205965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.206000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.206188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.206222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 
00:36:12.620 [2024-12-14 22:45:33.206483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.206516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.206696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.206729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.206965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.207000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.207173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.207205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.207396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.207429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 
00:36:12.620 [2024-12-14 22:45:33.207692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.207725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.207844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.207877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.208148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.208183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.208387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.208421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.208665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.208698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 
00:36:12.620 [2024-12-14 22:45:33.208964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.208999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.209108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.209140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.620 [2024-12-14 22:45:33.209325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.620 [2024-12-14 22:45:33.209359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.620 qpair failed and we were unable to recover it. 00:36:12.621 [2024-12-14 22:45:33.209486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.621 [2024-12-14 22:45:33.209518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.621 qpair failed and we were unable to recover it. 00:36:12.621 [2024-12-14 22:45:33.209786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.621 [2024-12-14 22:45:33.209820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.621 qpair failed and we were unable to recover it. 
00:36:12.621 [2024-12-14 22:45:33.210111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.621 [2024-12-14 22:45:33.210146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.621 qpair failed and we were unable to recover it. 00:36:12.621 [2024-12-14 22:45:33.210336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.621 [2024-12-14 22:45:33.210369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.621 qpair failed and we were unable to recover it. 00:36:12.621 [2024-12-14 22:45:33.210547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.621 [2024-12-14 22:45:33.210585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.621 qpair failed and we were unable to recover it. 00:36:12.621 [2024-12-14 22:45:33.210845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.621 [2024-12-14 22:45:33.210879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.621 qpair failed and we were unable to recover it. 00:36:12.621 [2024-12-14 22:45:33.211020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.621 [2024-12-14 22:45:33.211054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.621 qpair failed and we were unable to recover it. 
00:36:12.621 [2024-12-14 22:45:33.211272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.621 [2024-12-14 22:45:33.211305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.621 qpair failed and we were unable to recover it. 00:36:12.621 [2024-12-14 22:45:33.211420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.621 [2024-12-14 22:45:33.211451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.621 qpair failed and we were unable to recover it. 00:36:12.621 [2024-12-14 22:45:33.211577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.621 [2024-12-14 22:45:33.211610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.621 qpair failed and we were unable to recover it. 00:36:12.621 [2024-12-14 22:45:33.211800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.621 [2024-12-14 22:45:33.211833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.621 qpair failed and we were unable to recover it. 00:36:12.621 [2024-12-14 22:45:33.212072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.621 [2024-12-14 22:45:33.212107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.621 qpair failed and we were unable to recover it. 
00:36:12.621 [2024-12-14 22:45:33.212368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.621 [2024-12-14 22:45:33.212401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.621 qpair failed and we were unable to recover it. 00:36:12.621 [2024-12-14 22:45:33.212612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.621 [2024-12-14 22:45:33.212645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.621 qpair failed and we were unable to recover it. 00:36:12.621 [2024-12-14 22:45:33.212862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.621 [2024-12-14 22:45:33.212896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.621 qpair failed and we were unable to recover it. 00:36:12.621 [2024-12-14 22:45:33.213099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.621 [2024-12-14 22:45:33.213133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.621 qpair failed and we were unable to recover it. 00:36:12.621 [2024-12-14 22:45:33.213304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.621 [2024-12-14 22:45:33.213338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.621 qpair failed and we were unable to recover it. 
00:36:12.621 [2024-12-14 22:45:33.213534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.621 [2024-12-14 22:45:33.213566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.621 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure triplets for tqpair=0x7f1a94000b90 repeated through 2024-12-14 22:45:33.223147]
00:36:12.622 [2024-12-14 22:45:33.223478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.622 [2024-12-14 22:45:33.223512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.622 qpair failed and we were unable to recover it.
00:36:12.622 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:12.622 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:36:12.622 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:12.622 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:12.622 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[the shell trace lines above were interleaved with further identical connect()/qpair-failure triplets for tqpair=0x5fd6a0, addr=10.0.0.2, port=4420]
[identical connect() failed, errno = 111 / sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." triplets repeated through 2024-12-14 22:45:33.234463]
00:36:12.624 [2024-12-14 22:45:33.234553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.234573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 00:36:12.624 [2024-12-14 22:45:33.234665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.234687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 00:36:12.624 [2024-12-14 22:45:33.234856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.234878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 00:36:12.624 [2024-12-14 22:45:33.235045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.235066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 00:36:12.624 [2024-12-14 22:45:33.235164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.235181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 
00:36:12.624 [2024-12-14 22:45:33.235346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.235366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 00:36:12.624 [2024-12-14 22:45:33.235558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.235578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 00:36:12.624 [2024-12-14 22:45:33.235676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.235695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 00:36:12.624 [2024-12-14 22:45:33.235839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.235859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 00:36:12.624 [2024-12-14 22:45:33.235949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.235969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 
00:36:12.624 [2024-12-14 22:45:33.236141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.236163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 00:36:12.624 [2024-12-14 22:45:33.236260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.236281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 00:36:12.624 [2024-12-14 22:45:33.236423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.236444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 00:36:12.624 [2024-12-14 22:45:33.236542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.236560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 00:36:12.624 [2024-12-14 22:45:33.236716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.236737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 
00:36:12.624 [2024-12-14 22:45:33.236833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.236851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 00:36:12.624 [2024-12-14 22:45:33.237014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.237035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 00:36:12.624 [2024-12-14 22:45:33.237188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.237208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 00:36:12.624 [2024-12-14 22:45:33.237310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.624 [2024-12-14 22:45:33.237330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.624 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.237416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.237436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 
00:36:12.625 [2024-12-14 22:45:33.237673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.237693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.237789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.237808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.237883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.237901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.237993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.238011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.238104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.238124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 
00:36:12.625 [2024-12-14 22:45:33.238267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.238288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.238445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.238465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.238575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.238594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.238693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.238712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.238893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.238919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 
00:36:12.625 [2024-12-14 22:45:33.239007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.239027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.239107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.239125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.239200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.239217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.239325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.239349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.239508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.239527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 
00:36:12.625 [2024-12-14 22:45:33.239611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.239630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.239721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.239740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.239885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.239917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.240001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.240019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.240121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.240139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 
00:36:12.625 [2024-12-14 22:45:33.240234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.240255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.240323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.240341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.240484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.240506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.240661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.240681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.240820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.240840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 
00:36:12.625 [2024-12-14 22:45:33.240934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.240955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.241051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.241071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.241306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.241329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.241419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.241440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.241586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.241606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 
00:36:12.625 [2024-12-14 22:45:33.241700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.241720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.241807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.241825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.241978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.242000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.242167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.242189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.242273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.242291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 
00:36:12.625 [2024-12-14 22:45:33.242459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.242480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.242585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.242619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.242785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.242809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.625 qpair failed and we were unable to recover it. 00:36:12.625 [2024-12-14 22:45:33.242975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.625 [2024-12-14 22:45:33.243002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.626 qpair failed and we were unable to recover it. 00:36:12.626 [2024-12-14 22:45:33.243085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.626 [2024-12-14 22:45:33.243110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.626 qpair failed and we were unable to recover it. 
00:36:12.626 [2024-12-14 22:45:33.243217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.626 [2024-12-14 22:45:33.243246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.626 qpair failed and we were unable to recover it. 00:36:12.626 [2024-12-14 22:45:33.243497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.626 [2024-12-14 22:45:33.243524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.626 qpair failed and we were unable to recover it. 00:36:12.626 [2024-12-14 22:45:33.243627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.626 [2024-12-14 22:45:33.243651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.626 qpair failed and we were unable to recover it. 00:36:12.626 [2024-12-14 22:45:33.243912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.626 [2024-12-14 22:45:33.243940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.626 qpair failed and we were unable to recover it. 00:36:12.626 [2024-12-14 22:45:33.244030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.626 [2024-12-14 22:45:33.244055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.626 qpair failed and we were unable to recover it. 
00:36:12.626 [2024-12-14 22:45:33.244227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.626 [2024-12-14 22:45:33.244256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.626 qpair failed and we were unable to recover it. 00:36:12.626 [2024-12-14 22:45:33.244437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.626 [2024-12-14 22:45:33.244462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.626 qpair failed and we were unable to recover it. 00:36:12.626 [2024-12-14 22:45:33.244553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.626 [2024-12-14 22:45:33.244578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.626 qpair failed and we were unable to recover it. 00:36:12.626 [2024-12-14 22:45:33.244732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.626 [2024-12-14 22:45:33.244757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.626 qpair failed and we were unable to recover it. 00:36:12.626 [2024-12-14 22:45:33.244996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.626 [2024-12-14 22:45:33.245026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.626 qpair failed and we were unable to recover it. 
00:36:12.626 [2024-12-14 22:45:33.245200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.626 [2024-12-14 22:45:33.245230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.626 qpair failed and we were unable to recover it. 00:36:12.626 [2024-12-14 22:45:33.245342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.626 [2024-12-14 22:45:33.245366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.626 qpair failed and we were unable to recover it. 00:36:12.626 [2024-12-14 22:45:33.245566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.626 [2024-12-14 22:45:33.245592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.626 qpair failed and we were unable to recover it. 00:36:12.626 [2024-12-14 22:45:33.245697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.626 [2024-12-14 22:45:33.245722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.626 qpair failed and we were unable to recover it. 00:36:12.626 [2024-12-14 22:45:33.245943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.626 [2024-12-14 22:45:33.246005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.626 qpair failed and we were unable to recover it. 
00:36:12.626 [2024-12-14 22:45:33.246144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.626 [2024-12-14 22:45:33.246180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.626 qpair failed and we were unable to recover it.
00:36:12.627 [2024-12-14 22:45:33.250571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.627 [2024-12-14 22:45:33.250600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.627 qpair failed and we were unable to recover it.
00:36:12.628 [2024-12-14 22:45:33.258076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.628 [2024-12-14 22:45:33.258116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420
00:36:12.628 qpair failed and we were unable to recover it.
00:36:12.628 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:12.628 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:12.628 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:12.628 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:12.628 [2024-12-14 22:45:33.260934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.628 [2024-12-14 22:45:33.260968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.628 qpair failed and we were unable to recover it. 00:36:12.628 [2024-12-14 22:45:33.261080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.628 [2024-12-14 22:45:33.261113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.628 qpair failed and we were unable to recover it. 00:36:12.628 [2024-12-14 22:45:33.261324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.628 [2024-12-14 22:45:33.261356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.628 qpair failed and we were unable to recover it. 00:36:12.628 [2024-12-14 22:45:33.261493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.628 [2024-12-14 22:45:33.261526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.628 qpair failed and we were unable to recover it. 00:36:12.628 [2024-12-14 22:45:33.261785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.628 [2024-12-14 22:45:33.261818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.628 qpair failed and we were unable to recover it. 
00:36:12.628 [2024-12-14 22:45:33.261943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.628 [2024-12-14 22:45:33.261977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.628 qpair failed and we were unable to recover it. 00:36:12.628 [2024-12-14 22:45:33.262111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.628 [2024-12-14 22:45:33.262144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.628 qpair failed and we were unable to recover it. 00:36:12.628 [2024-12-14 22:45:33.262404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.628 [2024-12-14 22:45:33.262437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.628 qpair failed and we were unable to recover it. 00:36:12.628 [2024-12-14 22:45:33.262730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.628 [2024-12-14 22:45:33.262762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.628 qpair failed and we were unable to recover it. 00:36:12.628 [2024-12-14 22:45:33.262895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.628 [2024-12-14 22:45:33.262940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.628 qpair failed and we were unable to recover it. 
00:36:12.628 [2024-12-14 22:45:33.263151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.628 [2024-12-14 22:45:33.263185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.628 qpair failed and we were unable to recover it. 00:36:12.628 [2024-12-14 22:45:33.263330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.628 [2024-12-14 22:45:33.263363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.628 qpair failed and we were unable to recover it. 00:36:12.628 [2024-12-14 22:45:33.263600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.628 [2024-12-14 22:45:33.263634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.628 qpair failed and we were unable to recover it. 00:36:12.628 [2024-12-14 22:45:33.263820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.628 [2024-12-14 22:45:33.263853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.628 qpair failed and we were unable to recover it. 00:36:12.628 [2024-12-14 22:45:33.264054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.628 [2024-12-14 22:45:33.264088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.628 qpair failed and we were unable to recover it. 
00:36:12.628 [2024-12-14 22:45:33.264287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.628 [2024-12-14 22:45:33.264320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.628 qpair failed and we were unable to recover it. 00:36:12.628 [2024-12-14 22:45:33.264558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.629 [2024-12-14 22:45:33.264592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.629 qpair failed and we were unable to recover it. 00:36:12.629 [2024-12-14 22:45:33.264855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.629 [2024-12-14 22:45:33.264887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.629 qpair failed and we were unable to recover it. 00:36:12.629 [2024-12-14 22:45:33.265124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.629 [2024-12-14 22:45:33.265158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.629 qpair failed and we were unable to recover it. 00:36:12.629 [2024-12-14 22:45:33.265287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.629 [2024-12-14 22:45:33.265321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.629 qpair failed and we were unable to recover it. 
00:36:12.629 [2024-12-14 22:45:33.265563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.629 [2024-12-14 22:45:33.265596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.629 qpair failed and we were unable to recover it. 00:36:12.629 [2024-12-14 22:45:33.265836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.629 [2024-12-14 22:45:33.265869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a94000b90 with addr=10.0.0.2, port=4420 00:36:12.629 qpair failed and we were unable to recover it. 00:36:12.629 [2024-12-14 22:45:33.266095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.629 [2024-12-14 22:45:33.266135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.629 qpair failed and we were unable to recover it. 00:36:12.629 [2024-12-14 22:45:33.266330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.629 [2024-12-14 22:45:33.266363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.629 qpair failed and we were unable to recover it. 00:36:12.629 [2024-12-14 22:45:33.266595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.629 [2024-12-14 22:45:33.266630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.629 qpair failed and we were unable to recover it. 
00:36:12.629 [2024-12-14 22:45:33.266870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.266921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.267066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.267098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.267338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.267371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.267500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.267535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.267815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.267850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.268002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.268036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.268215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.268248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.268395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.268428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.268716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.268753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.268970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.269005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.269110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.269143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.269337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.269383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.269580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.269615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.269784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.269822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.270058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.270094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.270271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.270303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.270450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.270483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.270751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.270785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.271031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.271065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.271180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.271212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.271425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.271460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.271745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.271778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.271972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.272008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.272139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.272172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.272365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.272397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.272585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.272618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.272788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.272821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.273085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.273125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.273232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.273265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.273405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.273439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.629 [2024-12-14 22:45:33.273693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.629 [2024-12-14 22:45:33.273727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.629 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.273935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.273970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.274185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.274218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.274397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.274430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.274568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.274601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.274805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.274838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.274987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.275022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.275201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.275234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.275352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.275384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.275515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.275549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.275799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.275832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.276101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.276136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.276329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.276362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.276584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.276617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.276832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.276865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.277148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.277182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.277456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.277488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.280108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.280147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.280340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.280373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.280610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.280644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.280921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.280956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.281160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.281193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.281401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.281433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.281702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.281737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.282027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.282063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.282247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.282283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.282478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.282511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.282696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.282730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.282882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.282927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.283157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.283193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.283432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.283464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.283592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.283626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.283830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.283863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.284085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.284119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.284301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.284336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.284600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.284634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.284842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.284875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.285158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.285199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.285406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.285441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.630 [2024-12-14 22:45:33.285667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.630 [2024-12-14 22:45:33.285701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.630 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.285890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.285947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.286075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.286108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.286295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.286329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.286565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.286598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.286788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.286822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.287003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.287039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.287280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.287314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.287513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.287549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.287763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.287797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.287922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.287957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.288219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.288255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.288503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.288536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.288792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.288825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.289109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.289144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.289363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.289395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.289528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.289561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.289776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.289809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 Malloc0
00:36:12.631 [2024-12-14 22:45:33.290096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.290151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.290383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.290430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.290718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.290761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:12.631 [2024-12-14 22:45:33.291007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.291058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.291347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:36:12.631 [2024-12-14 22:45:33.291396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 [2024-12-14 22:45:33.291699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.291747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:12.631 [2024-12-14 22:45:33.291990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.631 [2024-12-14 22:45:33.292037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420
00:36:12.631 qpair failed and we were unable to recover it.
00:36:12.631 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:12.631 [2024-12-14 22:45:33.292323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.631 [2024-12-14 22:45:33.292378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.631 qpair failed and we were unable to recover it. 00:36:12.631 [2024-12-14 22:45:33.292697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.631 [2024-12-14 22:45:33.292738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.631 qpair failed and we were unable to recover it. 00:36:12.631 [2024-12-14 22:45:33.293021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.631 [2024-12-14 22:45:33.293056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.631 qpair failed and we were unable to recover it. 00:36:12.631 [2024-12-14 22:45:33.293199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.631 [2024-12-14 22:45:33.293232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.631 qpair failed and we were unable to recover it. 00:36:12.631 [2024-12-14 22:45:33.293429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.631 [2024-12-14 22:45:33.293462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.631 qpair failed and we were unable to recover it. 
00:36:12.631 [2024-12-14 22:45:33.293722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.631 [2024-12-14 22:45:33.293755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.631 qpair failed and we were unable to recover it. 00:36:12.631 [2024-12-14 22:45:33.294046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.631 [2024-12-14 22:45:33.294081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.631 qpair failed and we were unable to recover it. 00:36:12.631 [2024-12-14 22:45:33.294299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.631 [2024-12-14 22:45:33.294333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.631 qpair failed and we were unable to recover it. 00:36:12.631 [2024-12-14 22:45:33.294506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.631 [2024-12-14 22:45:33.294539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.294777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.294810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 
00:36:12.632 [2024-12-14 22:45:33.295120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.295157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.295401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.295443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.295744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.295777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.296012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.296047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.296293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.296325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 
00:36:12.632 [2024-12-14 22:45:33.296528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.296564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.296673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.296707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.296972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.297006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.297208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.297241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.297385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.297418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 
00:36:12.632 [2024-12-14 22:45:33.297553] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:12.632 [2024-12-14 22:45:33.297681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.297715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.298000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.298035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.298226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.298259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.298380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.298413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.298627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.298667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 
00:36:12.632 [2024-12-14 22:45:33.298885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.298931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.299196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.299229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.299411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.299444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.299710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.299742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.299978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.300013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 
00:36:12.632 [2024-12-14 22:45:33.300196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.300229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.300445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.300477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.300591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.300624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.300859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.300892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.301155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.301188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 
00:36:12.632 [2024-12-14 22:45:33.301399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.301432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.301697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.301730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.301850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.301883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.302084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.302118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.302309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.302341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 
00:36:12.632 [2024-12-14 22:45:33.302625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.302658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.302789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.302822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.303111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.303144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.303336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.303369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.303575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.303608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 
00:36:12.632 [2024-12-14 22:45:33.303872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.303915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.304107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.632 [2024-12-14 22:45:33.304140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.632 qpair failed and we were unable to recover it. 00:36:12.632 [2024-12-14 22:45:33.304314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.304347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.304522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.304555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.304822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.304854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 
00:36:12.633 [2024-12-14 22:45:33.305037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.305072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1aa0000b90 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.305391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.305449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.305662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.305699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.305887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.305934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.306063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.306097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 
00:36:12.633 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.633 [2024-12-14 22:45:33.306314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.306348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:12.633 [2024-12-14 22:45:33.306615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.306648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.306862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.306896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.307125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.307159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 
00:36:12.633 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:12.633 [2024-12-14 22:45:33.307380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.307414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.307701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.307734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.307929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.307963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.308093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.308127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.308319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.308353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 
00:36:12.633 [2024-12-14 22:45:33.308624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.308656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.308921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.308956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.309157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.309192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.309385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.309419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.309696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.309729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 
00:36:12.633 [2024-12-14 22:45:33.309871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.309914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.310102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.310135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.310351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.310386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.310645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.310679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.310921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.310956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 
00:36:12.633 [2024-12-14 22:45:33.311205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.311243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.311376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.311408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.311666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.311706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.311974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.312009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.312307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.312342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 
00:36:12.633 [2024-12-14 22:45:33.312618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.312652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.312832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.312866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.313156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.313191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.313448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.313481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 [2024-12-14 22:45:33.313672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.313705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 
00:36:12.633 [2024-12-14 22:45:33.313880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.633 [2024-12-14 22:45:33.313927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.633 qpair failed and we were unable to recover it. 00:36:12.633 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.634 [2024-12-14 22:45:33.314191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.314228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.314488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.314522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.314727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.314761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 
00:36:12.634 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.634 [2024-12-14 22:45:33.315010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.315045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:12.634 [2024-12-14 22:45:33.315250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.315285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.315546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.315579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.315757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.315791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.315972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.316007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 
00:36:12.634 [2024-12-14 22:45:33.316146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.316181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.316390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.316423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.316611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.316644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.316819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.316852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.317071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.317105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 
00:36:12.634 [2024-12-14 22:45:33.317288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.317321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.317523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.317556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.317735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.317768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.318046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.318082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.318338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.318371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 
00:36:12.634 [2024-12-14 22:45:33.318566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.318599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.318786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.318819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.319067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.319103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.319369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.319401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.319576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.319609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 
00:36:12.634 [2024-12-14 22:45:33.319872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.319921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.320182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.320215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.320429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.320463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.320578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.320611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.320849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.320882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 
00:36:12.634 [2024-12-14 22:45:33.321144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.321179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.321422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.321455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.321566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.321604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 [2024-12-14 22:45:33.321872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.321922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.634 [2024-12-14 22:45:33.322189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.322222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 
00:36:12.634 [2024-12-14 22:45:33.322458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.322494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:12.634 [2024-12-14 22:45:33.322733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.322766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.634 [2024-12-14 22:45:33.322951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.322987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 00:36:12.634 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:12.634 [2024-12-14 22:45:33.323230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.634 [2024-12-14 22:45:33.323264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.634 qpair failed and we were unable to recover it. 
00:36:12.635 [2024-12-14 22:45:33.323457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.635 [2024-12-14 22:45:33.323491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.635 qpair failed and we were unable to recover it. 00:36:12.635 [2024-12-14 22:45:33.323689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.635 [2024-12-14 22:45:33.323722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.635 qpair failed and we were unable to recover it. 00:36:12.635 [2024-12-14 22:45:33.323850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.635 [2024-12-14 22:45:33.323884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.635 qpair failed and we were unable to recover it. 00:36:12.635 [2024-12-14 22:45:33.324070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.635 [2024-12-14 22:45:33.324104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.635 qpair failed and we were unable to recover it. 00:36:12.635 [2024-12-14 22:45:33.324346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.635 [2024-12-14 22:45:33.324379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd6a0 with addr=10.0.0.2, port=4420 00:36:12.635 qpair failed and we were unable to recover it. 
00:36:12.635 [2024-12-14 22:45:33.324677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.635 [2024-12-14 22:45:33.324748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.635 qpair failed and we were unable to recover it. 00:36:12.635 [2024-12-14 22:45:33.324952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.635 [2024-12-14 22:45:33.324990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.635 qpair failed and we were unable to recover it. 00:36:12.635 [2024-12-14 22:45:33.325195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.635 [2024-12-14 22:45:33.325228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.635 qpair failed and we were unable to recover it. 00:36:12.635 [2024-12-14 22:45:33.325439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.635 [2024-12-14 22:45:33.325472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.635 qpair failed and we were unable to recover it. 00:36:12.635 [2024-12-14 22:45:33.325695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.635 [2024-12-14 22:45:33.325728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a98000b90 with addr=10.0.0.2, port=4420 00:36:12.635 qpair failed and we were unable to recover it. 
00:36:12.635 [2024-12-14 22:45:33.325768] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:12.635 [2024-12-14 22:45:33.328258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.635 [2024-12-14 22:45:33.328386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.635 [2024-12-14 22:45:33.328429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.635 [2024-12-14 22:45:33.328452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.635 [2024-12-14 22:45:33.328472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:12.635 [2024-12-14 22:45:33.328525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.635 qpair failed and we were unable to recover it. 
00:36:12.635 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.635 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:12.635 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.635 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:12.635 [2024-12-14 22:45:33.338174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.635 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.635 [2024-12-14 22:45:33.338268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.635 [2024-12-14 22:45:33.338303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.635 [2024-12-14 22:45:33.338323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.635 [2024-12-14 22:45:33.338339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:12.635 [2024-12-14 22:45:33.338380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.635 qpair failed and we were unable to recover it. 
00:36:12.635 22:45:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 542328 00:36:12.635 [2024-12-14 22:45:33.348163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.635 [2024-12-14 22:45:33.348250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.635 [2024-12-14 22:45:33.348274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.635 [2024-12-14 22:45:33.348287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.635 [2024-12-14 22:45:33.348299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:12.635 [2024-12-14 22:45:33.348326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.635 qpair failed and we were unable to recover it. 
00:36:12.635 [2024-12-14 22:45:33.358173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.635 [2024-12-14 22:45:33.358250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.635 [2024-12-14 22:45:33.358268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.635 [2024-12-14 22:45:33.358277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.635 [2024-12-14 22:45:33.358286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:12.635 [2024-12-14 22:45:33.358305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.635 qpair failed and we were unable to recover it. 
00:36:12.635 [2024-12-14 22:45:33.368165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.635 [2024-12-14 22:45:33.368224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.635 [2024-12-14 22:45:33.368238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.635 [2024-12-14 22:45:33.368246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.635 [2024-12-14 22:45:33.368252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:12.635 [2024-12-14 22:45:33.368268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.635 qpair failed and we were unable to recover it. 
00:36:12.635 [2024-12-14 22:45:33.378133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.635 [2024-12-14 22:45:33.378204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.635 [2024-12-14 22:45:33.378218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.635 [2024-12-14 22:45:33.378225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.635 [2024-12-14 22:45:33.378231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:12.635 [2024-12-14 22:45:33.378246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.635 qpair failed and we were unable to recover it. 
00:36:12.635 [2024-12-14 22:45:33.388137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.635 [2024-12-14 22:45:33.388192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.635 [2024-12-14 22:45:33.388206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.635 [2024-12-14 22:45:33.388213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.635 [2024-12-14 22:45:33.388219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:12.635 [2024-12-14 22:45:33.388234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.635 qpair failed and we were unable to recover it. 
00:36:12.635 [2024-12-14 22:45:33.398191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.635 [2024-12-14 22:45:33.398250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.635 [2024-12-14 22:45:33.398263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.635 [2024-12-14 22:45:33.398269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.636 [2024-12-14 22:45:33.398276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:12.636 [2024-12-14 22:45:33.398291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.636 qpair failed and we were unable to recover it. 
00:36:12.636 [2024-12-14 22:45:33.408177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.636 [2024-12-14 22:45:33.408230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.636 [2024-12-14 22:45:33.408244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.636 [2024-12-14 22:45:33.408251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.636 [2024-12-14 22:45:33.408257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:12.636 [2024-12-14 22:45:33.408272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.636 qpair failed and we were unable to recover it. 
00:36:12.636 [2024-12-14 22:45:33.418273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.636 [2024-12-14 22:45:33.418332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.636 [2024-12-14 22:45:33.418345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.636 [2024-12-14 22:45:33.418352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.636 [2024-12-14 22:45:33.418359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:12.636 [2024-12-14 22:45:33.418375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.636 qpair failed and we were unable to recover it. 
00:36:12.636 [2024-12-14 22:45:33.428278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.636 [2024-12-14 22:45:33.428331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.636 [2024-12-14 22:45:33.428347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.636 [2024-12-14 22:45:33.428354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.636 [2024-12-14 22:45:33.428361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:12.636 [2024-12-14 22:45:33.428376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.636 qpair failed and we were unable to recover it. 
00:36:12.636 [2024-12-14 22:45:33.438291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.636 [2024-12-14 22:45:33.438346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.636 [2024-12-14 22:45:33.438360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.636 [2024-12-14 22:45:33.438367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.636 [2024-12-14 22:45:33.438373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:12.636 [2024-12-14 22:45:33.438388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.636 qpair failed and we were unable to recover it. 
00:36:12.636 [2024-12-14 22:45:33.448257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.636 [2024-12-14 22:45:33.448312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.636 [2024-12-14 22:45:33.448326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.636 [2024-12-14 22:45:33.448334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.636 [2024-12-14 22:45:33.448340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:12.636 [2024-12-14 22:45:33.448356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.636 qpair failed and we were unable to recover it. 
00:36:12.636 [2024-12-14 22:45:33.458301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.636 [2024-12-14 22:45:33.458352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.636 [2024-12-14 22:45:33.458366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.636 [2024-12-14 22:45:33.458373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.636 [2024-12-14 22:45:33.458380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:12.636 [2024-12-14 22:45:33.458396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.636 qpair failed and we were unable to recover it. 
00:36:12.636 [2024-12-14 22:45:33.468375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.636 [2024-12-14 22:45:33.468432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.636 [2024-12-14 22:45:33.468446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.636 [2024-12-14 22:45:33.468453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.636 [2024-12-14 22:45:33.468463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:12.636 [2024-12-14 22:45:33.468478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.636 qpair failed and we were unable to recover it. 
00:36:12.636 [2024-12-14 22:45:33.478343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.636 [2024-12-14 22:45:33.478399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.636 [2024-12-14 22:45:33.478413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.636 [2024-12-14 22:45:33.478420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.636 [2024-12-14 22:45:33.478426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.636 [2024-12-14 22:45:33.478441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.636 qpair failed and we were unable to recover it.
00:36:12.636 [2024-12-14 22:45:33.488435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.636 [2024-12-14 22:45:33.488496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.636 [2024-12-14 22:45:33.488509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.636 [2024-12-14 22:45:33.488516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.636 [2024-12-14 22:45:33.488522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.636 [2024-12-14 22:45:33.488536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.636 qpair failed and we were unable to recover it.
00:36:12.897 [2024-12-14 22:45:33.498470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.897 [2024-12-14 22:45:33.498527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.897 [2024-12-14 22:45:33.498540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.897 [2024-12-14 22:45:33.498547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.897 [2024-12-14 22:45:33.498553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.897 [2024-12-14 22:45:33.498568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.897 qpair failed and we were unable to recover it.
00:36:12.897 [2024-12-14 22:45:33.508426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.897 [2024-12-14 22:45:33.508480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.897 [2024-12-14 22:45:33.508494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.897 [2024-12-14 22:45:33.508502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.897 [2024-12-14 22:45:33.508509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.897 [2024-12-14 22:45:33.508523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.897 qpair failed and we were unable to recover it.
00:36:12.897 [2024-12-14 22:45:33.518463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.897 [2024-12-14 22:45:33.518525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.897 [2024-12-14 22:45:33.518539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.897 [2024-12-14 22:45:33.518546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.897 [2024-12-14 22:45:33.518552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.897 [2024-12-14 22:45:33.518566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.897 qpair failed and we were unable to recover it.
00:36:12.897 [2024-12-14 22:45:33.528495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.897 [2024-12-14 22:45:33.528552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.897 [2024-12-14 22:45:33.528565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.897 [2024-12-14 22:45:33.528572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.897 [2024-12-14 22:45:33.528578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.897 [2024-12-14 22:45:33.528593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.897 qpair failed and we were unable to recover it.
00:36:12.897 [2024-12-14 22:45:33.538531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.897 [2024-12-14 22:45:33.538585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.897 [2024-12-14 22:45:33.538598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.897 [2024-12-14 22:45:33.538604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.897 [2024-12-14 22:45:33.538611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.897 [2024-12-14 22:45:33.538626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.897 qpair failed and we were unable to recover it.
00:36:12.897 [2024-12-14 22:45:33.548549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.898 [2024-12-14 22:45:33.548603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.898 [2024-12-14 22:45:33.548617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.898 [2024-12-14 22:45:33.548624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.898 [2024-12-14 22:45:33.548630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.898 [2024-12-14 22:45:33.548645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.898 qpair failed and we were unable to recover it.
00:36:12.898 [2024-12-14 22:45:33.558588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.898 [2024-12-14 22:45:33.558647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.898 [2024-12-14 22:45:33.558664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.898 [2024-12-14 22:45:33.558671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.898 [2024-12-14 22:45:33.558677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.898 [2024-12-14 22:45:33.558692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.898 qpair failed and we were unable to recover it.
00:36:12.898 [2024-12-14 22:45:33.568627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.898 [2024-12-14 22:45:33.568704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.898 [2024-12-14 22:45:33.568718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.898 [2024-12-14 22:45:33.568726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.898 [2024-12-14 22:45:33.568733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.898 [2024-12-14 22:45:33.568747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.898 qpair failed and we were unable to recover it.
00:36:12.898 [2024-12-14 22:45:33.578709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.898 [2024-12-14 22:45:33.578765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.898 [2024-12-14 22:45:33.578778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.898 [2024-12-14 22:45:33.578784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.898 [2024-12-14 22:45:33.578791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.898 [2024-12-14 22:45:33.578807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.898 qpair failed and we were unable to recover it.
00:36:12.898 [2024-12-14 22:45:33.588651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.898 [2024-12-14 22:45:33.588703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.898 [2024-12-14 22:45:33.588717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.898 [2024-12-14 22:45:33.588725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.898 [2024-12-14 22:45:33.588731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.898 [2024-12-14 22:45:33.588746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.898 qpair failed and we were unable to recover it.
00:36:12.898 [2024-12-14 22:45:33.598688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.898 [2024-12-14 22:45:33.598743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.898 [2024-12-14 22:45:33.598757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.898 [2024-12-14 22:45:33.598765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.898 [2024-12-14 22:45:33.598774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.898 [2024-12-14 22:45:33.598789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.898 qpair failed and we were unable to recover it.
00:36:12.898 [2024-12-14 22:45:33.608731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.898 [2024-12-14 22:45:33.608835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.898 [2024-12-14 22:45:33.608849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.898 [2024-12-14 22:45:33.608856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.898 [2024-12-14 22:45:33.608863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.898 [2024-12-14 22:45:33.608878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.898 qpair failed and we were unable to recover it.
00:36:12.898 [2024-12-14 22:45:33.618801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.898 [2024-12-14 22:45:33.618855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.898 [2024-12-14 22:45:33.618868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.898 [2024-12-14 22:45:33.618875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.898 [2024-12-14 22:45:33.618881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.898 [2024-12-14 22:45:33.618896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.898 qpair failed and we were unable to recover it.
00:36:12.898 [2024-12-14 22:45:33.628832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.898 [2024-12-14 22:45:33.628885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.898 [2024-12-14 22:45:33.628899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.898 [2024-12-14 22:45:33.628909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.898 [2024-12-14 22:45:33.628916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.898 [2024-12-14 22:45:33.628931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.898 qpair failed and we were unable to recover it.
00:36:12.898 [2024-12-14 22:45:33.638886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.898 [2024-12-14 22:45:33.638952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.898 [2024-12-14 22:45:33.638966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.898 [2024-12-14 22:45:33.638973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.898 [2024-12-14 22:45:33.638980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.898 [2024-12-14 22:45:33.638995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.898 qpair failed and we were unable to recover it.
00:36:12.898 [2024-12-14 22:45:33.648905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.898 [2024-12-14 22:45:33.648964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.898 [2024-12-14 22:45:33.648979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.898 [2024-12-14 22:45:33.648986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.898 [2024-12-14 22:45:33.648993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.898 [2024-12-14 22:45:33.649009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.898 qpair failed and we were unable to recover it.
00:36:12.898 [2024-12-14 22:45:33.658867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.898 [2024-12-14 22:45:33.658923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.898 [2024-12-14 22:45:33.658938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.898 [2024-12-14 22:45:33.658944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.898 [2024-12-14 22:45:33.658950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.898 [2024-12-14 22:45:33.658965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.898 qpair failed and we were unable to recover it.
00:36:12.898 [2024-12-14 22:45:33.668931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.898 [2024-12-14 22:45:33.668982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.898 [2024-12-14 22:45:33.668996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.898 [2024-12-14 22:45:33.669003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.898 [2024-12-14 22:45:33.669009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.898 [2024-12-14 22:45:33.669025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.898 qpair failed and we were unable to recover it.
00:36:12.898 [2024-12-14 22:45:33.678945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.898 [2024-12-14 22:45:33.679004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.898 [2024-12-14 22:45:33.679017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.898 [2024-12-14 22:45:33.679024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.899 [2024-12-14 22:45:33.679031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.899 [2024-12-14 22:45:33.679046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.899 qpair failed and we were unable to recover it.
00:36:12.899 [2024-12-14 22:45:33.689022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.899 [2024-12-14 22:45:33.689077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.899 [2024-12-14 22:45:33.689092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.899 [2024-12-14 22:45:33.689098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.899 [2024-12-14 22:45:33.689105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.899 [2024-12-14 22:45:33.689120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.899 qpair failed and we were unable to recover it.
00:36:12.899 [2024-12-14 22:45:33.699037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.899 [2024-12-14 22:45:33.699087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.899 [2024-12-14 22:45:33.699101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.899 [2024-12-14 22:45:33.699108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.899 [2024-12-14 22:45:33.699114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.899 [2024-12-14 22:45:33.699129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.899 qpair failed and we were unable to recover it.
00:36:12.899 [2024-12-14 22:45:33.709050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.899 [2024-12-14 22:45:33.709097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.899 [2024-12-14 22:45:33.709110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.899 [2024-12-14 22:45:33.709117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.899 [2024-12-14 22:45:33.709123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.899 [2024-12-14 22:45:33.709138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.899 qpair failed and we were unable to recover it.
00:36:12.899 [2024-12-14 22:45:33.719106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.899 [2024-12-14 22:45:33.719171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.899 [2024-12-14 22:45:33.719184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.899 [2024-12-14 22:45:33.719191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.899 [2024-12-14 22:45:33.719197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.899 [2024-12-14 22:45:33.719212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.899 qpair failed and we were unable to recover it.
00:36:12.899 [2024-12-14 22:45:33.729060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.899 [2024-12-14 22:45:33.729115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.899 [2024-12-14 22:45:33.729128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.899 [2024-12-14 22:45:33.729138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.899 [2024-12-14 22:45:33.729144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.899 [2024-12-14 22:45:33.729159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.899 qpair failed and we were unable to recover it.
00:36:12.899 [2024-12-14 22:45:33.739161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.899 [2024-12-14 22:45:33.739219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.899 [2024-12-14 22:45:33.739232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.899 [2024-12-14 22:45:33.739239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.899 [2024-12-14 22:45:33.739244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.899 [2024-12-14 22:45:33.739259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.899 qpair failed and we were unable to recover it.
00:36:12.899 [2024-12-14 22:45:33.749101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.899 [2024-12-14 22:45:33.749199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.899 [2024-12-14 22:45:33.749214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.899 [2024-12-14 22:45:33.749222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.899 [2024-12-14 22:45:33.749229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.899 [2024-12-14 22:45:33.749244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.899 qpair failed and we were unable to recover it.
00:36:12.899 [2024-12-14 22:45:33.759136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.899 [2024-12-14 22:45:33.759194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.899 [2024-12-14 22:45:33.759208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.899 [2024-12-14 22:45:33.759216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.899 [2024-12-14 22:45:33.759223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.899 [2024-12-14 22:45:33.759240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.899 qpair failed and we were unable to recover it.
00:36:12.899 [2024-12-14 22:45:33.769248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.899 [2024-12-14 22:45:33.769319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.899 [2024-12-14 22:45:33.769334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.899 [2024-12-14 22:45:33.769341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.899 [2024-12-14 22:45:33.769348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.899 [2024-12-14 22:45:33.769366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.899 qpair failed and we were unable to recover it.
00:36:12.899 [2024-12-14 22:45:33.779272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.899 [2024-12-14 22:45:33.779330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.899 [2024-12-14 22:45:33.779342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.899 [2024-12-14 22:45:33.779350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.899 [2024-12-14 22:45:33.779356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:12.899 [2024-12-14 22:45:33.779371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:12.899 qpair failed and we were unable to recover it.
00:36:13.160 [2024-12-14 22:45:33.789287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.160 [2024-12-14 22:45:33.789344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.160 [2024-12-14 22:45:33.789357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.160 [2024-12-14 22:45:33.789365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.160 [2024-12-14 22:45:33.789372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.160 [2024-12-14 22:45:33.789387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.160 qpair failed and we were unable to recover it.
00:36:13.160 [2024-12-14 22:45:33.799316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.160 [2024-12-14 22:45:33.799372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.160 [2024-12-14 22:45:33.799385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.160 [2024-12-14 22:45:33.799391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.160 [2024-12-14 22:45:33.799398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.160 [2024-12-14 22:45:33.799413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.160 qpair failed and we were unable to recover it.
00:36:13.160 [2024-12-14 22:45:33.809341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.160 [2024-12-14 22:45:33.809395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.160 [2024-12-14 22:45:33.809408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.160 [2024-12-14 22:45:33.809415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.160 [2024-12-14 22:45:33.809421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.160 [2024-12-14 22:45:33.809436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.160 qpair failed and we were unable to recover it.
00:36:13.160 [2024-12-14 22:45:33.819373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.160 [2024-12-14 22:45:33.819432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.160 [2024-12-14 22:45:33.819445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.160 [2024-12-14 22:45:33.819452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.160 [2024-12-14 22:45:33.819459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.160 [2024-12-14 22:45:33.819474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.160 qpair failed and we were unable to recover it.
00:36:13.160 [2024-12-14 22:45:33.829396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.160 [2024-12-14 22:45:33.829453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.160 [2024-12-14 22:45:33.829465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.160 [2024-12-14 22:45:33.829471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.160 [2024-12-14 22:45:33.829478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.160 [2024-12-14 22:45:33.829493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.160 qpair failed and we were unable to recover it. 
00:36:13.160 [2024-12-14 22:45:33.839425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.160 [2024-12-14 22:45:33.839482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.160 [2024-12-14 22:45:33.839496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.160 [2024-12-14 22:45:33.839502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.160 [2024-12-14 22:45:33.839509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.160 [2024-12-14 22:45:33.839524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.160 qpair failed and we were unable to recover it. 
00:36:13.160 [2024-12-14 22:45:33.849465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.160 [2024-12-14 22:45:33.849521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.160 [2024-12-14 22:45:33.849536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.160 [2024-12-14 22:45:33.849542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.160 [2024-12-14 22:45:33.849549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.160 [2024-12-14 22:45:33.849564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.160 qpair failed and we were unable to recover it. 
00:36:13.160 [2024-12-14 22:45:33.859488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.160 [2024-12-14 22:45:33.859542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.160 [2024-12-14 22:45:33.859556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.160 [2024-12-14 22:45:33.859566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.160 [2024-12-14 22:45:33.859573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.160 [2024-12-14 22:45:33.859589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.160 qpair failed and we were unable to recover it. 
00:36:13.161 [2024-12-14 22:45:33.869504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.161 [2024-12-14 22:45:33.869555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.161 [2024-12-14 22:45:33.869570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.161 [2024-12-14 22:45:33.869576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.161 [2024-12-14 22:45:33.869583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.161 [2024-12-14 22:45:33.869598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.161 qpair failed and we were unable to recover it. 
00:36:13.161 [2024-12-14 22:45:33.879542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.161 [2024-12-14 22:45:33.879598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.161 [2024-12-14 22:45:33.879611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.161 [2024-12-14 22:45:33.879618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.161 [2024-12-14 22:45:33.879624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.161 [2024-12-14 22:45:33.879640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.161 qpair failed and we were unable to recover it. 
00:36:13.161 [2024-12-14 22:45:33.889617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.161 [2024-12-14 22:45:33.889722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.161 [2024-12-14 22:45:33.889736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.161 [2024-12-14 22:45:33.889743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.161 [2024-12-14 22:45:33.889750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.161 [2024-12-14 22:45:33.889765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.161 qpair failed and we were unable to recover it. 
00:36:13.161 [2024-12-14 22:45:33.899591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.161 [2024-12-14 22:45:33.899641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.161 [2024-12-14 22:45:33.899654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.161 [2024-12-14 22:45:33.899661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.161 [2024-12-14 22:45:33.899667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.161 [2024-12-14 22:45:33.899685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.161 qpair failed and we were unable to recover it. 
00:36:13.161 [2024-12-14 22:45:33.909632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.161 [2024-12-14 22:45:33.909686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.161 [2024-12-14 22:45:33.909699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.161 [2024-12-14 22:45:33.909706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.161 [2024-12-14 22:45:33.909713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.161 [2024-12-14 22:45:33.909728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.161 qpair failed and we were unable to recover it. 
00:36:13.161 [2024-12-14 22:45:33.919658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.161 [2024-12-14 22:45:33.919716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.161 [2024-12-14 22:45:33.919729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.161 [2024-12-14 22:45:33.919736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.161 [2024-12-14 22:45:33.919742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.161 [2024-12-14 22:45:33.919757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.161 qpair failed and we were unable to recover it. 
00:36:13.161 [2024-12-14 22:45:33.929665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.161 [2024-12-14 22:45:33.929713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.161 [2024-12-14 22:45:33.929727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.161 [2024-12-14 22:45:33.929734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.161 [2024-12-14 22:45:33.929740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.161 [2024-12-14 22:45:33.929754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.161 qpair failed and we were unable to recover it. 
00:36:13.161 [2024-12-14 22:45:33.939706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.161 [2024-12-14 22:45:33.939769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.161 [2024-12-14 22:45:33.939782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.161 [2024-12-14 22:45:33.939789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.161 [2024-12-14 22:45:33.939795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.161 [2024-12-14 22:45:33.939810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.161 qpair failed and we were unable to recover it. 
00:36:13.161 [2024-12-14 22:45:33.949670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.161 [2024-12-14 22:45:33.949730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.161 [2024-12-14 22:45:33.949744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.161 [2024-12-14 22:45:33.949751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.161 [2024-12-14 22:45:33.949758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.161 [2024-12-14 22:45:33.949773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.161 qpair failed and we were unable to recover it. 
00:36:13.161 [2024-12-14 22:45:33.959770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.161 [2024-12-14 22:45:33.959825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.161 [2024-12-14 22:45:33.959839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.161 [2024-12-14 22:45:33.959846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.161 [2024-12-14 22:45:33.959852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.161 [2024-12-14 22:45:33.959868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.161 qpair failed and we were unable to recover it. 
00:36:13.161 [2024-12-14 22:45:33.969808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.161 [2024-12-14 22:45:33.969866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.161 [2024-12-14 22:45:33.969880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.161 [2024-12-14 22:45:33.969887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.161 [2024-12-14 22:45:33.969894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.161 [2024-12-14 22:45:33.969913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.161 qpair failed and we were unable to recover it. 
00:36:13.161 [2024-12-14 22:45:33.979833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.161 [2024-12-14 22:45:33.979889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.161 [2024-12-14 22:45:33.979905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.161 [2024-12-14 22:45:33.979912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.161 [2024-12-14 22:45:33.979918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.161 [2024-12-14 22:45:33.979934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.161 qpair failed and we were unable to recover it. 
00:36:13.161 [2024-12-14 22:45:33.989857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.161 [2024-12-14 22:45:33.989912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.161 [2024-12-14 22:45:33.989929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.161 [2024-12-14 22:45:33.989936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.161 [2024-12-14 22:45:33.989942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.161 [2024-12-14 22:45:33.989956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.161 qpair failed and we were unable to recover it. 
00:36:13.161 [2024-12-14 22:45:33.999896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.161 [2024-12-14 22:45:33.999957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.162 [2024-12-14 22:45:33.999971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.162 [2024-12-14 22:45:33.999977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.162 [2024-12-14 22:45:33.999984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.162 [2024-12-14 22:45:33.999999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.162 qpair failed and we were unable to recover it. 
00:36:13.162 [2024-12-14 22:45:34.009925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.162 [2024-12-14 22:45:34.009984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.162 [2024-12-14 22:45:34.009998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.162 [2024-12-14 22:45:34.010004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.162 [2024-12-14 22:45:34.010011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.162 [2024-12-14 22:45:34.010026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.162 qpair failed and we were unable to recover it. 
00:36:13.162 [2024-12-14 22:45:34.019869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.162 [2024-12-14 22:45:34.019921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.162 [2024-12-14 22:45:34.019942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.162 [2024-12-14 22:45:34.019948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.162 [2024-12-14 22:45:34.019954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.162 [2024-12-14 22:45:34.019969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.162 qpair failed and we were unable to recover it. 
00:36:13.162 [2024-12-14 22:45:34.030024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.162 [2024-12-14 22:45:34.030075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.162 [2024-12-14 22:45:34.030088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.162 [2024-12-14 22:45:34.030095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.162 [2024-12-14 22:45:34.030105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.162 [2024-12-14 22:45:34.030120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.162 qpair failed and we were unable to recover it. 
00:36:13.162 [2024-12-14 22:45:34.040003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.162 [2024-12-14 22:45:34.040061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.162 [2024-12-14 22:45:34.040075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.162 [2024-12-14 22:45:34.040082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.162 [2024-12-14 22:45:34.040088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.162 [2024-12-14 22:45:34.040103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.162 qpair failed and we were unable to recover it. 
00:36:13.423 [2024-12-14 22:45:34.050001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.423 [2024-12-14 22:45:34.050063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.423 [2024-12-14 22:45:34.050077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.423 [2024-12-14 22:45:34.050085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.423 [2024-12-14 22:45:34.050092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.423 [2024-12-14 22:45:34.050107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.423 qpair failed and we were unable to recover it. 
00:36:13.423 [2024-12-14 22:45:34.060063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.423 [2024-12-14 22:45:34.060115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.423 [2024-12-14 22:45:34.060130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.423 [2024-12-14 22:45:34.060136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.423 [2024-12-14 22:45:34.060142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.423 [2024-12-14 22:45:34.060158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.423 qpair failed and we were unable to recover it. 
00:36:13.423 [2024-12-14 22:45:34.070123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.423 [2024-12-14 22:45:34.070177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.423 [2024-12-14 22:45:34.070191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.423 [2024-12-14 22:45:34.070198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.423 [2024-12-14 22:45:34.070206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.423 [2024-12-14 22:45:34.070220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.423 qpair failed and we were unable to recover it. 
00:36:13.423 [2024-12-14 22:45:34.080123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.423 [2024-12-14 22:45:34.080178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.423 [2024-12-14 22:45:34.080191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.423 [2024-12-14 22:45:34.080198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.423 [2024-12-14 22:45:34.080205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.423 [2024-12-14 22:45:34.080219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.423 qpair failed and we were unable to recover it. 
00:36:13.423 [2024-12-14 22:45:34.090069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.423 [2024-12-14 22:45:34.090126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.423 [2024-12-14 22:45:34.090139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.423 [2024-12-14 22:45:34.090145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.423 [2024-12-14 22:45:34.090152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.423 [2024-12-14 22:45:34.090166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.423 qpair failed and we were unable to recover it. 
00:36:13.423 [2024-12-14 22:45:34.100187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.423 [2024-12-14 22:45:34.100243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.423 [2024-12-14 22:45:34.100257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.423 [2024-12-14 22:45:34.100264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.423 [2024-12-14 22:45:34.100271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.423 [2024-12-14 22:45:34.100286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.423 qpair failed and we were unable to recover it. 
00:36:13.423 [2024-12-14 22:45:34.110280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.423 [2024-12-14 22:45:34.110335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.423 [2024-12-14 22:45:34.110348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.423 [2024-12-14 22:45:34.110355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.423 [2024-12-14 22:45:34.110361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.423 [2024-12-14 22:45:34.110376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.423 qpair failed and we were unable to recover it. 
00:36:13.423 [2024-12-14 22:45:34.120228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.424 [2024-12-14 22:45:34.120286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.424 [2024-12-14 22:45:34.120303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.424 [2024-12-14 22:45:34.120310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.424 [2024-12-14 22:45:34.120317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.424 [2024-12-14 22:45:34.120332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.424 qpair failed and we were unable to recover it.
00:36:13.424 [2024-12-14 22:45:34.130350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.424 [2024-12-14 22:45:34.130425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.424 [2024-12-14 22:45:34.130440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.424 [2024-12-14 22:45:34.130446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.424 [2024-12-14 22:45:34.130454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.424 [2024-12-14 22:45:34.130468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.424 qpair failed and we were unable to recover it.
00:36:13.424 [2024-12-14 22:45:34.140315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.424 [2024-12-14 22:45:34.140372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.424 [2024-12-14 22:45:34.140386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.424 [2024-12-14 22:45:34.140393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.424 [2024-12-14 22:45:34.140399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.424 [2024-12-14 22:45:34.140414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.424 qpair failed and we were unable to recover it.
00:36:13.424 [2024-12-14 22:45:34.150351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.424 [2024-12-14 22:45:34.150407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.424 [2024-12-14 22:45:34.150421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.424 [2024-12-14 22:45:34.150428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.424 [2024-12-14 22:45:34.150435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.424 [2024-12-14 22:45:34.150450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.424 qpair failed and we were unable to recover it.
00:36:13.424 [2024-12-14 22:45:34.160410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.424 [2024-12-14 22:45:34.160466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.424 [2024-12-14 22:45:34.160480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.424 [2024-12-14 22:45:34.160487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.424 [2024-12-14 22:45:34.160495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.424 [2024-12-14 22:45:34.160511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.424 qpair failed and we were unable to recover it.
00:36:13.424 [2024-12-14 22:45:34.170365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.424 [2024-12-14 22:45:34.170420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.424 [2024-12-14 22:45:34.170433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.424 [2024-12-14 22:45:34.170440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.424 [2024-12-14 22:45:34.170447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.424 [2024-12-14 22:45:34.170463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.424 qpair failed and we were unable to recover it.
00:36:13.424 [2024-12-14 22:45:34.180394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.424 [2024-12-14 22:45:34.180447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.424 [2024-12-14 22:45:34.180460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.424 [2024-12-14 22:45:34.180467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.424 [2024-12-14 22:45:34.180474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.424 [2024-12-14 22:45:34.180489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.424 qpair failed and we were unable to recover it.
00:36:13.424 [2024-12-14 22:45:34.190423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.424 [2024-12-14 22:45:34.190479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.424 [2024-12-14 22:45:34.190492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.424 [2024-12-14 22:45:34.190499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.424 [2024-12-14 22:45:34.190505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.424 [2024-12-14 22:45:34.190520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.424 qpair failed and we were unable to recover it.
00:36:13.424 [2024-12-14 22:45:34.200448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.424 [2024-12-14 22:45:34.200506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.424 [2024-12-14 22:45:34.200519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.424 [2024-12-14 22:45:34.200526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.424 [2024-12-14 22:45:34.200532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.424 [2024-12-14 22:45:34.200548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.424 qpair failed and we were unable to recover it.
00:36:13.424 [2024-12-14 22:45:34.210506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.424 [2024-12-14 22:45:34.210566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.424 [2024-12-14 22:45:34.210579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.424 [2024-12-14 22:45:34.210586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.424 [2024-12-14 22:45:34.210592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.424 [2024-12-14 22:45:34.210608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.424 qpair failed and we were unable to recover it.
00:36:13.424 [2024-12-14 22:45:34.220506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.424 [2024-12-14 22:45:34.220559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.424 [2024-12-14 22:45:34.220572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.424 [2024-12-14 22:45:34.220579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.424 [2024-12-14 22:45:34.220584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.424 [2024-12-14 22:45:34.220600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.424 qpair failed and we were unable to recover it.
00:36:13.424 [2024-12-14 22:45:34.230574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.424 [2024-12-14 22:45:34.230620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.424 [2024-12-14 22:45:34.230633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.424 [2024-12-14 22:45:34.230640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.424 [2024-12-14 22:45:34.230646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.424 [2024-12-14 22:45:34.230662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.424 qpair failed and we were unable to recover it.
00:36:13.424 [2024-12-14 22:45:34.240581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.424 [2024-12-14 22:45:34.240638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.424 [2024-12-14 22:45:34.240651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.424 [2024-12-14 22:45:34.240658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.424 [2024-12-14 22:45:34.240664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.424 [2024-12-14 22:45:34.240679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.424 qpair failed and we were unable to recover it.
00:36:13.424 [2024-12-14 22:45:34.250617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.424 [2024-12-14 22:45:34.250690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.424 [2024-12-14 22:45:34.250704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.424 [2024-12-14 22:45:34.250711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.425 [2024-12-14 22:45:34.250716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.425 [2024-12-14 22:45:34.250734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.425 qpair failed and we were unable to recover it.
00:36:13.425 [2024-12-14 22:45:34.260627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.425 [2024-12-14 22:45:34.260681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.425 [2024-12-14 22:45:34.260695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.425 [2024-12-14 22:45:34.260702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.425 [2024-12-14 22:45:34.260708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.425 [2024-12-14 22:45:34.260724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.425 qpair failed and we were unable to recover it.
00:36:13.425 [2024-12-14 22:45:34.270632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.425 [2024-12-14 22:45:34.270687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.425 [2024-12-14 22:45:34.270700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.425 [2024-12-14 22:45:34.270706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.425 [2024-12-14 22:45:34.270713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.425 [2024-12-14 22:45:34.270728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.425 qpair failed and we were unable to recover it.
00:36:13.425 [2024-12-14 22:45:34.280688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.425 [2024-12-14 22:45:34.280746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.425 [2024-12-14 22:45:34.280760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.425 [2024-12-14 22:45:34.280767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.425 [2024-12-14 22:45:34.280773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.425 [2024-12-14 22:45:34.280788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.425 qpair failed and we were unable to recover it.
00:36:13.425 [2024-12-14 22:45:34.290720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.425 [2024-12-14 22:45:34.290774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.425 [2024-12-14 22:45:34.290788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.425 [2024-12-14 22:45:34.290803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.425 [2024-12-14 22:45:34.290810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.425 [2024-12-14 22:45:34.290825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.425 qpair failed and we were unable to recover it.
00:36:13.425 [2024-12-14 22:45:34.300749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.425 [2024-12-14 22:45:34.300801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.425 [2024-12-14 22:45:34.300816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.425 [2024-12-14 22:45:34.300824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.425 [2024-12-14 22:45:34.300831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.425 [2024-12-14 22:45:34.300847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.425 qpair failed and we were unable to recover it.
00:36:13.686 [2024-12-14 22:45:34.310709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.686 [2024-12-14 22:45:34.310762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.686 [2024-12-14 22:45:34.310776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.686 [2024-12-14 22:45:34.310782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.686 [2024-12-14 22:45:34.310788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.686 [2024-12-14 22:45:34.310804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.686 qpair failed and we were unable to recover it.
00:36:13.686 [2024-12-14 22:45:34.320741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.686 [2024-12-14 22:45:34.320796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.686 [2024-12-14 22:45:34.320811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.686 [2024-12-14 22:45:34.320817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.686 [2024-12-14 22:45:34.320823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.686 [2024-12-14 22:45:34.320839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.686 qpair failed and we were unable to recover it.
00:36:13.686 [2024-12-14 22:45:34.330857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.686 [2024-12-14 22:45:34.330929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.686 [2024-12-14 22:45:34.330943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.686 [2024-12-14 22:45:34.330950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.686 [2024-12-14 22:45:34.330957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.686 [2024-12-14 22:45:34.330976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.686 qpair failed and we were unable to recover it.
00:36:13.686 [2024-12-14 22:45:34.340836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.686 [2024-12-14 22:45:34.340889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.686 [2024-12-14 22:45:34.340909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.686 [2024-12-14 22:45:34.340916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.686 [2024-12-14 22:45:34.340923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.686 [2024-12-14 22:45:34.340937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.686 qpair failed and we were unable to recover it.
00:36:13.686 [2024-12-14 22:45:34.350892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.686 [2024-12-14 22:45:34.350948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.686 [2024-12-14 22:45:34.350962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.686 [2024-12-14 22:45:34.350969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.686 [2024-12-14 22:45:34.350975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.686 [2024-12-14 22:45:34.350991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.686 qpair failed and we were unable to recover it.
00:36:13.686 [2024-12-14 22:45:34.360926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.686 [2024-12-14 22:45:34.360994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.686 [2024-12-14 22:45:34.361033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.686 [2024-12-14 22:45:34.361044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.686 [2024-12-14 22:45:34.361051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.686 [2024-12-14 22:45:34.361078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.686 qpair failed and we were unable to recover it.
00:36:13.686 [2024-12-14 22:45:34.370947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.686 [2024-12-14 22:45:34.371001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.686 [2024-12-14 22:45:34.371015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.686 [2024-12-14 22:45:34.371021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.686 [2024-12-14 22:45:34.371028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.686 [2024-12-14 22:45:34.371044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.686 qpair failed and we were unable to recover it.
00:36:13.686 [2024-12-14 22:45:34.380981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.686 [2024-12-14 22:45:34.381039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.686 [2024-12-14 22:45:34.381054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.686 [2024-12-14 22:45:34.381061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.686 [2024-12-14 22:45:34.381068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.686 [2024-12-14 22:45:34.381083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.686 qpair failed and we were unable to recover it.
00:36:13.686 [2024-12-14 22:45:34.390999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.686 [2024-12-14 22:45:34.391052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.686 [2024-12-14 22:45:34.391066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.687 [2024-12-14 22:45:34.391072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.687 [2024-12-14 22:45:34.391078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.687 [2024-12-14 22:45:34.391095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.687 qpair failed and we were unable to recover it.
00:36:13.687 [2024-12-14 22:45:34.401019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.687 [2024-12-14 22:45:34.401076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.687 [2024-12-14 22:45:34.401089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.687 [2024-12-14 22:45:34.401096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.687 [2024-12-14 22:45:34.401102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.687 [2024-12-14 22:45:34.401116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.687 qpair failed and we were unable to recover it.
00:36:13.687 [2024-12-14 22:45:34.410995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.687 [2024-12-14 22:45:34.411054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.687 [2024-12-14 22:45:34.411068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.687 [2024-12-14 22:45:34.411075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.687 [2024-12-14 22:45:34.411082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.687 [2024-12-14 22:45:34.411096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.687 qpair failed and we were unable to recover it.
00:36:13.687 [2024-12-14 22:45:34.421026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.687 [2024-12-14 22:45:34.421086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.687 [2024-12-14 22:45:34.421106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.687 [2024-12-14 22:45:34.421113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.687 [2024-12-14 22:45:34.421119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.687 [2024-12-14 22:45:34.421135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.687 qpair failed and we were unable to recover it.
00:36:13.687 [2024-12-14 22:45:34.431131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.687 [2024-12-14 22:45:34.431182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.687 [2024-12-14 22:45:34.431195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.687 [2024-12-14 22:45:34.431202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.687 [2024-12-14 22:45:34.431209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.687 [2024-12-14 22:45:34.431224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.687 qpair failed and we were unable to recover it. 
00:36:13.687 [2024-12-14 22:45:34.441199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.687 [2024-12-14 22:45:34.441253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.687 [2024-12-14 22:45:34.441267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.687 [2024-12-14 22:45:34.441274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.687 [2024-12-14 22:45:34.441280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.687 [2024-12-14 22:45:34.441295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.687 qpair failed and we were unable to recover it. 
00:36:13.687 [2024-12-14 22:45:34.451190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.687 [2024-12-14 22:45:34.451247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.687 [2024-12-14 22:45:34.451262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.687 [2024-12-14 22:45:34.451270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.687 [2024-12-14 22:45:34.451277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.687 [2024-12-14 22:45:34.451291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.687 qpair failed and we were unable to recover it. 
00:36:13.687 [2024-12-14 22:45:34.461202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.687 [2024-12-14 22:45:34.461256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.687 [2024-12-14 22:45:34.461269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.687 [2024-12-14 22:45:34.461276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.687 [2024-12-14 22:45:34.461283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.687 [2024-12-14 22:45:34.461301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.687 qpair failed and we were unable to recover it. 
00:36:13.687 [2024-12-14 22:45:34.471245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.687 [2024-12-14 22:45:34.471292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.687 [2024-12-14 22:45:34.471306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.687 [2024-12-14 22:45:34.471313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.687 [2024-12-14 22:45:34.471318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.687 [2024-12-14 22:45:34.471333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.687 qpair failed and we were unable to recover it. 
00:36:13.687 [2024-12-14 22:45:34.481228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.687 [2024-12-14 22:45:34.481284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.687 [2024-12-14 22:45:34.481297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.687 [2024-12-14 22:45:34.481304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.687 [2024-12-14 22:45:34.481310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.687 [2024-12-14 22:45:34.481325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.687 qpair failed and we were unable to recover it. 
00:36:13.687 [2024-12-14 22:45:34.491338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.687 [2024-12-14 22:45:34.491392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.687 [2024-12-14 22:45:34.491405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.687 [2024-12-14 22:45:34.491412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.687 [2024-12-14 22:45:34.491418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.687 [2024-12-14 22:45:34.491433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.687 qpair failed and we were unable to recover it. 
00:36:13.687 [2024-12-14 22:45:34.501333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.687 [2024-12-14 22:45:34.501386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.687 [2024-12-14 22:45:34.501399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.687 [2024-12-14 22:45:34.501406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.687 [2024-12-14 22:45:34.501412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.687 [2024-12-14 22:45:34.501426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.687 qpair failed and we were unable to recover it. 
00:36:13.687 [2024-12-14 22:45:34.511399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.687 [2024-12-14 22:45:34.511454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.687 [2024-12-14 22:45:34.511467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.687 [2024-12-14 22:45:34.511474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.687 [2024-12-14 22:45:34.511481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.687 [2024-12-14 22:45:34.511496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.687 qpair failed and we were unable to recover it. 
00:36:13.687 [2024-12-14 22:45:34.521398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.687 [2024-12-14 22:45:34.521475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.687 [2024-12-14 22:45:34.521490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.687 [2024-12-14 22:45:34.521496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.687 [2024-12-14 22:45:34.521503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.687 [2024-12-14 22:45:34.521518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.688 qpair failed and we were unable to recover it. 
00:36:13.688 [2024-12-14 22:45:34.531455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.688 [2024-12-14 22:45:34.531519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.688 [2024-12-14 22:45:34.531533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.688 [2024-12-14 22:45:34.531540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.688 [2024-12-14 22:45:34.531546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.688 [2024-12-14 22:45:34.531561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.688 qpair failed and we were unable to recover it. 
00:36:13.688 [2024-12-14 22:45:34.541443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.688 [2024-12-14 22:45:34.541495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.688 [2024-12-14 22:45:34.541508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.688 [2024-12-14 22:45:34.541515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.688 [2024-12-14 22:45:34.541523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.688 [2024-12-14 22:45:34.541537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.688 qpair failed and we were unable to recover it. 
00:36:13.688 [2024-12-14 22:45:34.551473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.688 [2024-12-14 22:45:34.551525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.688 [2024-12-14 22:45:34.551542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.688 [2024-12-14 22:45:34.551550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.688 [2024-12-14 22:45:34.551556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.688 [2024-12-14 22:45:34.551571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.688 qpair failed and we were unable to recover it. 
00:36:13.688 [2024-12-14 22:45:34.561509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.688 [2024-12-14 22:45:34.561566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.688 [2024-12-14 22:45:34.561580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.688 [2024-12-14 22:45:34.561587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.688 [2024-12-14 22:45:34.561593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.688 [2024-12-14 22:45:34.561609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.688 qpair failed and we were unable to recover it. 
00:36:13.949 [2024-12-14 22:45:34.571538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.949 [2024-12-14 22:45:34.571595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.949 [2024-12-14 22:45:34.571609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.949 [2024-12-14 22:45:34.571616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.949 [2024-12-14 22:45:34.571623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.949 [2024-12-14 22:45:34.571638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.949 qpair failed and we were unable to recover it. 
00:36:13.949 [2024-12-14 22:45:34.581557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.949 [2024-12-14 22:45:34.581606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.949 [2024-12-14 22:45:34.581619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.949 [2024-12-14 22:45:34.581626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.949 [2024-12-14 22:45:34.581633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.949 [2024-12-14 22:45:34.581647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.949 qpair failed and we were unable to recover it. 
00:36:13.949 [2024-12-14 22:45:34.591592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.949 [2024-12-14 22:45:34.591654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.949 [2024-12-14 22:45:34.591666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.949 [2024-12-14 22:45:34.591674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.949 [2024-12-14 22:45:34.591683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.949 [2024-12-14 22:45:34.591699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.949 qpair failed and we were unable to recover it. 
00:36:13.949 [2024-12-14 22:45:34.601635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.949 [2024-12-14 22:45:34.601693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.949 [2024-12-14 22:45:34.601707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.949 [2024-12-14 22:45:34.601713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.949 [2024-12-14 22:45:34.601720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.949 [2024-12-14 22:45:34.601735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.949 qpair failed and we were unable to recover it. 
00:36:13.949 [2024-12-14 22:45:34.611689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.949 [2024-12-14 22:45:34.611756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.949 [2024-12-14 22:45:34.611770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.949 [2024-12-14 22:45:34.611777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.949 [2024-12-14 22:45:34.611784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.949 [2024-12-14 22:45:34.611799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.949 qpair failed and we were unable to recover it. 
00:36:13.949 [2024-12-14 22:45:34.621682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.949 [2024-12-14 22:45:34.621737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.949 [2024-12-14 22:45:34.621751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.949 [2024-12-14 22:45:34.621757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.949 [2024-12-14 22:45:34.621763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.949 [2024-12-14 22:45:34.621778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.949 qpair failed and we were unable to recover it. 
00:36:13.949 [2024-12-14 22:45:34.631633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.949 [2024-12-14 22:45:34.631693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.949 [2024-12-14 22:45:34.631706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.949 [2024-12-14 22:45:34.631714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.949 [2024-12-14 22:45:34.631720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.949 [2024-12-14 22:45:34.631735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.949 qpair failed and we were unable to recover it. 
00:36:13.949 [2024-12-14 22:45:34.641746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.949 [2024-12-14 22:45:34.641817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.949 [2024-12-14 22:45:34.641832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.949 [2024-12-14 22:45:34.641840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.949 [2024-12-14 22:45:34.641847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.949 [2024-12-14 22:45:34.641862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.949 qpair failed and we were unable to recover it. 
00:36:13.949 [2024-12-14 22:45:34.651775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.949 [2024-12-14 22:45:34.651832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.949 [2024-12-14 22:45:34.651846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.949 [2024-12-14 22:45:34.651853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.949 [2024-12-14 22:45:34.651858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.949 [2024-12-14 22:45:34.651874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.949 qpair failed and we were unable to recover it. 
00:36:13.949 [2024-12-14 22:45:34.661715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.949 [2024-12-14 22:45:34.661771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.949 [2024-12-14 22:45:34.661786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.949 [2024-12-14 22:45:34.661792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.949 [2024-12-14 22:45:34.661799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.949 [2024-12-14 22:45:34.661815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.949 qpair failed and we were unable to recover it. 
00:36:13.949 [2024-12-14 22:45:34.671823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.949 [2024-12-14 22:45:34.671878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.949 [2024-12-14 22:45:34.671892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.950 [2024-12-14 22:45:34.671899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.950 [2024-12-14 22:45:34.671910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.950 [2024-12-14 22:45:34.671925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.950 qpair failed and we were unable to recover it. 
00:36:13.950 [2024-12-14 22:45:34.681854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.950 [2024-12-14 22:45:34.681915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.950 [2024-12-14 22:45:34.681932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.950 [2024-12-14 22:45:34.681939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.950 [2024-12-14 22:45:34.681945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.950 [2024-12-14 22:45:34.681961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.950 qpair failed and we were unable to recover it. 
00:36:13.950 [2024-12-14 22:45:34.691884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.950 [2024-12-14 22:45:34.691950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.950 [2024-12-14 22:45:34.691964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.950 [2024-12-14 22:45:34.691971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.950 [2024-12-14 22:45:34.691978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.950 [2024-12-14 22:45:34.691993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.950 qpair failed and we were unable to recover it. 
00:36:13.950 [2024-12-14 22:45:34.701907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.950 [2024-12-14 22:45:34.701955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.950 [2024-12-14 22:45:34.701969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.950 [2024-12-14 22:45:34.701975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.950 [2024-12-14 22:45:34.701981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.950 [2024-12-14 22:45:34.701997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.950 qpair failed and we were unable to recover it. 
00:36:13.950 [2024-12-14 22:45:34.711928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.950 [2024-12-14 22:45:34.711977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.950 [2024-12-14 22:45:34.711991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.950 [2024-12-14 22:45:34.711997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.950 [2024-12-14 22:45:34.712004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.950 [2024-12-14 22:45:34.712019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.950 qpair failed and we were unable to recover it. 
00:36:13.950 [2024-12-14 22:45:34.721974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.950 [2024-12-14 22:45:34.722028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.950 [2024-12-14 22:45:34.722042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.950 [2024-12-14 22:45:34.722052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.950 [2024-12-14 22:45:34.722058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.950 [2024-12-14 22:45:34.722074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.950 qpair failed and we were unable to recover it. 
00:36:13.950 [2024-12-14 22:45:34.732011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.950 [2024-12-14 22:45:34.732062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.950 [2024-12-14 22:45:34.732075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.950 [2024-12-14 22:45:34.732081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.950 [2024-12-14 22:45:34.732089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:13.950 [2024-12-14 22:45:34.732104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.950 qpair failed and we were unable to recover it. 
00:36:13.950 [2024-12-14 22:45:34.741949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.950 [2024-12-14 22:45:34.742004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.950 [2024-12-14 22:45:34.742019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.950 [2024-12-14 22:45:34.742026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.950 [2024-12-14 22:45:34.742032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.950 [2024-12-14 22:45:34.742047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.950 qpair failed and we were unable to recover it.
00:36:13.950 [2024-12-14 22:45:34.752048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.950 [2024-12-14 22:45:34.752104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.950 [2024-12-14 22:45:34.752117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.950 [2024-12-14 22:45:34.752124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.950 [2024-12-14 22:45:34.752130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.950 [2024-12-14 22:45:34.752145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.950 qpair failed and we were unable to recover it.
00:36:13.950 [2024-12-14 22:45:34.762106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.950 [2024-12-14 22:45:34.762160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.950 [2024-12-14 22:45:34.762173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.950 [2024-12-14 22:45:34.762179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.950 [2024-12-14 22:45:34.762186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.950 [2024-12-14 22:45:34.762201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.950 qpair failed and we were unable to recover it.
00:36:13.950 [2024-12-14 22:45:34.772116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.950 [2024-12-14 22:45:34.772193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.950 [2024-12-14 22:45:34.772209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.950 [2024-12-14 22:45:34.772216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.950 [2024-12-14 22:45:34.772223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.950 [2024-12-14 22:45:34.772238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.950 qpair failed and we were unable to recover it.
00:36:13.950 [2024-12-14 22:45:34.782125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.950 [2024-12-14 22:45:34.782178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.950 [2024-12-14 22:45:34.782192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.950 [2024-12-14 22:45:34.782199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.950 [2024-12-14 22:45:34.782205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.950 [2024-12-14 22:45:34.782220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.950 qpair failed and we were unable to recover it.
00:36:13.950 [2024-12-14 22:45:34.792102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.950 [2024-12-14 22:45:34.792171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.950 [2024-12-14 22:45:34.792186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.950 [2024-12-14 22:45:34.792193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.950 [2024-12-14 22:45:34.792201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.950 [2024-12-14 22:45:34.792218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.950 qpair failed and we were unable to recover it.
00:36:13.950 [2024-12-14 22:45:34.802219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.950 [2024-12-14 22:45:34.802276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.950 [2024-12-14 22:45:34.802289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.950 [2024-12-14 22:45:34.802296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.950 [2024-12-14 22:45:34.802302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.950 [2024-12-14 22:45:34.802317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.951 qpair failed and we were unable to recover it.
00:36:13.951 [2024-12-14 22:45:34.812160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.951 [2024-12-14 22:45:34.812219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.951 [2024-12-14 22:45:34.812232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.951 [2024-12-14 22:45:34.812239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.951 [2024-12-14 22:45:34.812246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.951 [2024-12-14 22:45:34.812261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.951 qpair failed and we were unable to recover it.
00:36:13.951 [2024-12-14 22:45:34.822248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.951 [2024-12-14 22:45:34.822314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.951 [2024-12-14 22:45:34.822328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.951 [2024-12-14 22:45:34.822334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.951 [2024-12-14 22:45:34.822341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:13.951 [2024-12-14 22:45:34.822357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.951 qpair failed and we were unable to recover it.
00:36:14.211 [2024-12-14 22:45:34.832275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.211 [2024-12-14 22:45:34.832327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.211 [2024-12-14 22:45:34.832340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.211 [2024-12-14 22:45:34.832347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.211 [2024-12-14 22:45:34.832353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.211 [2024-12-14 22:45:34.832368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.211 qpair failed and we were unable to recover it.
00:36:14.211 [2024-12-14 22:45:34.842273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.211 [2024-12-14 22:45:34.842331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.211 [2024-12-14 22:45:34.842346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.211 [2024-12-14 22:45:34.842353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.211 [2024-12-14 22:45:34.842359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.211 [2024-12-14 22:45:34.842376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.211 qpair failed and we were unable to recover it.
00:36:14.211 [2024-12-14 22:45:34.852341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.211 [2024-12-14 22:45:34.852397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.212 [2024-12-14 22:45:34.852410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.212 [2024-12-14 22:45:34.852420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.212 [2024-12-14 22:45:34.852427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.212 [2024-12-14 22:45:34.852442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.212 qpair failed and we were unable to recover it.
00:36:14.212 [2024-12-14 22:45:34.862297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.212 [2024-12-14 22:45:34.862357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.212 [2024-12-14 22:45:34.862371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.212 [2024-12-14 22:45:34.862378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.212 [2024-12-14 22:45:34.862385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.212 [2024-12-14 22:45:34.862400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.212 qpair failed and we were unable to recover it.
00:36:14.212 [2024-12-14 22:45:34.872332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.212 [2024-12-14 22:45:34.872389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.212 [2024-12-14 22:45:34.872402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.212 [2024-12-14 22:45:34.872408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.212 [2024-12-14 22:45:34.872415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.212 [2024-12-14 22:45:34.872431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.212 qpair failed and we were unable to recover it.
00:36:14.212 [2024-12-14 22:45:34.882467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.212 [2024-12-14 22:45:34.882522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.212 [2024-12-14 22:45:34.882535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.212 [2024-12-14 22:45:34.882541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.212 [2024-12-14 22:45:34.882548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.212 [2024-12-14 22:45:34.882562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.212 qpair failed and we were unable to recover it.
00:36:14.212 [2024-12-14 22:45:34.892470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.212 [2024-12-14 22:45:34.892522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.212 [2024-12-14 22:45:34.892536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.212 [2024-12-14 22:45:34.892543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.212 [2024-12-14 22:45:34.892549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.212 [2024-12-14 22:45:34.892567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.212 qpair failed and we were unable to recover it.
00:36:14.212 [2024-12-14 22:45:34.902495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.212 [2024-12-14 22:45:34.902552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.212 [2024-12-14 22:45:34.902565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.212 [2024-12-14 22:45:34.902572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.212 [2024-12-14 22:45:34.902578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.212 [2024-12-14 22:45:34.902594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.212 qpair failed and we were unable to recover it.
00:36:14.212 [2024-12-14 22:45:34.912453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.212 [2024-12-14 22:45:34.912513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.212 [2024-12-14 22:45:34.912526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.212 [2024-12-14 22:45:34.912533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.212 [2024-12-14 22:45:34.912539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.212 [2024-12-14 22:45:34.912554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.212 qpair failed and we were unable to recover it.
00:36:14.212 [2024-12-14 22:45:34.922489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.212 [2024-12-14 22:45:34.922547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.212 [2024-12-14 22:45:34.922560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.212 [2024-12-14 22:45:34.922567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.212 [2024-12-14 22:45:34.922573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.212 [2024-12-14 22:45:34.922588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.212 qpair failed and we were unable to recover it.
00:36:14.212 [2024-12-14 22:45:34.932616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.212 [2024-12-14 22:45:34.932666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.212 [2024-12-14 22:45:34.932679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.212 [2024-12-14 22:45:34.932686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.212 [2024-12-14 22:45:34.932692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.212 [2024-12-14 22:45:34.932707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.212 qpair failed and we were unable to recover it.
00:36:14.212 [2024-12-14 22:45:34.942612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.212 [2024-12-14 22:45:34.942665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.212 [2024-12-14 22:45:34.942679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.212 [2024-12-14 22:45:34.942686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.212 [2024-12-14 22:45:34.942693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.212 [2024-12-14 22:45:34.942708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.212 qpair failed and we were unable to recover it.
00:36:14.212 [2024-12-14 22:45:34.952648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.212 [2024-12-14 22:45:34.952700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.212 [2024-12-14 22:45:34.952713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.212 [2024-12-14 22:45:34.952720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.212 [2024-12-14 22:45:34.952726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.212 [2024-12-14 22:45:34.952741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.212 qpair failed and we were unable to recover it.
00:36:14.212 [2024-12-14 22:45:34.962594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.212 [2024-12-14 22:45:34.962651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.212 [2024-12-14 22:45:34.962665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.212 [2024-12-14 22:45:34.962671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.212 [2024-12-14 22:45:34.962678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.212 [2024-12-14 22:45:34.962693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.212 qpair failed and we were unable to recover it.
00:36:14.212 [2024-12-14 22:45:34.972714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.212 [2024-12-14 22:45:34.972770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.212 [2024-12-14 22:45:34.972783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.212 [2024-12-14 22:45:34.972790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.212 [2024-12-14 22:45:34.972796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.212 [2024-12-14 22:45:34.972811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.212 qpair failed and we were unable to recover it.
00:36:14.212 [2024-12-14 22:45:34.982704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.212 [2024-12-14 22:45:34.982759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.212 [2024-12-14 22:45:34.982776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.213 [2024-12-14 22:45:34.982783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.213 [2024-12-14 22:45:34.982790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.213 [2024-12-14 22:45:34.982805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.213 qpair failed and we were unable to recover it.
00:36:14.213 [2024-12-14 22:45:34.992739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.213 [2024-12-14 22:45:34.992804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.213 [2024-12-14 22:45:34.992817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.213 [2024-12-14 22:45:34.992824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.213 [2024-12-14 22:45:34.992830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.213 [2024-12-14 22:45:34.992846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.213 qpair failed and we were unable to recover it.
00:36:14.213 [2024-12-14 22:45:35.002784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.213 [2024-12-14 22:45:35.002839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.213 [2024-12-14 22:45:35.002852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.213 [2024-12-14 22:45:35.002859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.213 [2024-12-14 22:45:35.002865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.213 [2024-12-14 22:45:35.002880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.213 qpair failed and we were unable to recover it.
00:36:14.213 [2024-12-14 22:45:35.012814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.213 [2024-12-14 22:45:35.012865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.213 [2024-12-14 22:45:35.012879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.213 [2024-12-14 22:45:35.012886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.213 [2024-12-14 22:45:35.012893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.213 [2024-12-14 22:45:35.012919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.213 qpair failed and we were unable to recover it.
00:36:14.213 [2024-12-14 22:45:35.022829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.213 [2024-12-14 22:45:35.022879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.213 [2024-12-14 22:45:35.022892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.213 [2024-12-14 22:45:35.022898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.213 [2024-12-14 22:45:35.022907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.213 [2024-12-14 22:45:35.022926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.213 qpair failed and we were unable to recover it.
00:36:14.213 [2024-12-14 22:45:35.032859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.213 [2024-12-14 22:45:35.032915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.213 [2024-12-14 22:45:35.032928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.213 [2024-12-14 22:45:35.032935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.213 [2024-12-14 22:45:35.032941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.213 [2024-12-14 22:45:35.032956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.213 qpair failed and we were unable to recover it.
00:36:14.213 [2024-12-14 22:45:35.042934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.213 [2024-12-14 22:45:35.043006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.213 [2024-12-14 22:45:35.043022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.213 [2024-12-14 22:45:35.043029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.213 [2024-12-14 22:45:35.043036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.213 [2024-12-14 22:45:35.043051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.213 qpair failed and we were unable to recover it.
00:36:14.213 [2024-12-14 22:45:35.052920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.213 [2024-12-14 22:45:35.052994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.213 [2024-12-14 22:45:35.053008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.213 [2024-12-14 22:45:35.053015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.213 [2024-12-14 22:45:35.053021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.213 [2024-12-14 22:45:35.053037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.213 qpair failed and we were unable to recover it. 
00:36:14.213 [2024-12-14 22:45:35.062948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.213 [2024-12-14 22:45:35.063004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.213 [2024-12-14 22:45:35.063018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.213 [2024-12-14 22:45:35.063025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.213 [2024-12-14 22:45:35.063031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.213 [2024-12-14 22:45:35.063046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.213 qpair failed and we were unable to recover it. 
00:36:14.213 [2024-12-14 22:45:35.072973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.213 [2024-12-14 22:45:35.073026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.213 [2024-12-14 22:45:35.073039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.213 [2024-12-14 22:45:35.073046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.213 [2024-12-14 22:45:35.073053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.213 [2024-12-14 22:45:35.073068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.213 qpair failed and we were unable to recover it. 
00:36:14.213 [2024-12-14 22:45:35.083039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.213 [2024-12-14 22:45:35.083126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.213 [2024-12-14 22:45:35.083140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.213 [2024-12-14 22:45:35.083148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.213 [2024-12-14 22:45:35.083154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.213 [2024-12-14 22:45:35.083169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.213 qpair failed and we were unable to recover it. 
00:36:14.213 [2024-12-14 22:45:35.093061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.213 [2024-12-14 22:45:35.093120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.213 [2024-12-14 22:45:35.093133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.213 [2024-12-14 22:45:35.093139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.213 [2024-12-14 22:45:35.093146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.213 [2024-12-14 22:45:35.093161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.213 qpair failed and we were unable to recover it. 
00:36:14.474 [2024-12-14 22:45:35.103021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.474 [2024-12-14 22:45:35.103079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.474 [2024-12-14 22:45:35.103092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.474 [2024-12-14 22:45:35.103100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.474 [2024-12-14 22:45:35.103107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.474 [2024-12-14 22:45:35.103122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.474 qpair failed and we were unable to recover it. 
00:36:14.474 [2024-12-14 22:45:35.113050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.474 [2024-12-14 22:45:35.113109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.474 [2024-12-14 22:45:35.113127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.474 [2024-12-14 22:45:35.113134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.474 [2024-12-14 22:45:35.113141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.474 [2024-12-14 22:45:35.113156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.474 qpair failed and we were unable to recover it. 
00:36:14.474 [2024-12-14 22:45:35.123084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.474 [2024-12-14 22:45:35.123139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.474 [2024-12-14 22:45:35.123153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.474 [2024-12-14 22:45:35.123160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.474 [2024-12-14 22:45:35.123166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.474 [2024-12-14 22:45:35.123181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.474 qpair failed and we were unable to recover it. 
00:36:14.474 [2024-12-14 22:45:35.133149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.474 [2024-12-14 22:45:35.133205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.474 [2024-12-14 22:45:35.133217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.474 [2024-12-14 22:45:35.133224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.474 [2024-12-14 22:45:35.133231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.474 [2024-12-14 22:45:35.133245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.474 qpair failed and we were unable to recover it. 
00:36:14.474 [2024-12-14 22:45:35.143174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.474 [2024-12-14 22:45:35.143227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.474 [2024-12-14 22:45:35.143241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.474 [2024-12-14 22:45:35.143248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.474 [2024-12-14 22:45:35.143254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.474 [2024-12-14 22:45:35.143269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.474 qpair failed and we were unable to recover it. 
00:36:14.474 [2024-12-14 22:45:35.153208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.474 [2024-12-14 22:45:35.153257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.474 [2024-12-14 22:45:35.153270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.474 [2024-12-14 22:45:35.153277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.474 [2024-12-14 22:45:35.153290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.474 [2024-12-14 22:45:35.153305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.474 qpair failed and we were unable to recover it. 
00:36:14.474 [2024-12-14 22:45:35.163239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.474 [2024-12-14 22:45:35.163294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.474 [2024-12-14 22:45:35.163308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.474 [2024-12-14 22:45:35.163314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.474 [2024-12-14 22:45:35.163321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.474 [2024-12-14 22:45:35.163336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.474 qpair failed and we were unable to recover it. 
00:36:14.474 [2024-12-14 22:45:35.173233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.474 [2024-12-14 22:45:35.173291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.474 [2024-12-14 22:45:35.173304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.474 [2024-12-14 22:45:35.173311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.474 [2024-12-14 22:45:35.173317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.474 [2024-12-14 22:45:35.173333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.474 qpair failed and we were unable to recover it. 
00:36:14.474 [2024-12-14 22:45:35.183295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.474 [2024-12-14 22:45:35.183374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.474 [2024-12-14 22:45:35.183387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.474 [2024-12-14 22:45:35.183395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.474 [2024-12-14 22:45:35.183401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.474 [2024-12-14 22:45:35.183416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.474 qpair failed and we were unable to recover it. 
00:36:14.474 [2024-12-14 22:45:35.193255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.474 [2024-12-14 22:45:35.193304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.474 [2024-12-14 22:45:35.193317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.474 [2024-12-14 22:45:35.193324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.474 [2024-12-14 22:45:35.193330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.475 [2024-12-14 22:45:35.193345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.475 qpair failed and we were unable to recover it. 
00:36:14.475 [2024-12-14 22:45:35.203351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.475 [2024-12-14 22:45:35.203407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.475 [2024-12-14 22:45:35.203420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.475 [2024-12-14 22:45:35.203426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.475 [2024-12-14 22:45:35.203433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.475 [2024-12-14 22:45:35.203447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.475 qpair failed and we were unable to recover it. 
00:36:14.475 [2024-12-14 22:45:35.213373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.475 [2024-12-14 22:45:35.213430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.475 [2024-12-14 22:45:35.213443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.475 [2024-12-14 22:45:35.213449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.475 [2024-12-14 22:45:35.213456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.475 [2024-12-14 22:45:35.213470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.475 qpair failed and we were unable to recover it. 
00:36:14.475 [2024-12-14 22:45:35.223408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.475 [2024-12-14 22:45:35.223463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.475 [2024-12-14 22:45:35.223476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.475 [2024-12-14 22:45:35.223483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.475 [2024-12-14 22:45:35.223489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.475 [2024-12-14 22:45:35.223504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.475 qpair failed and we were unable to recover it. 
00:36:14.475 [2024-12-14 22:45:35.233471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.475 [2024-12-14 22:45:35.233534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.475 [2024-12-14 22:45:35.233547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.475 [2024-12-14 22:45:35.233554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.475 [2024-12-14 22:45:35.233560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.475 [2024-12-14 22:45:35.233575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.475 qpair failed and we were unable to recover it. 
00:36:14.475 [2024-12-14 22:45:35.243419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.475 [2024-12-14 22:45:35.243472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.475 [2024-12-14 22:45:35.243489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.475 [2024-12-14 22:45:35.243496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.475 [2024-12-14 22:45:35.243503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.475 [2024-12-14 22:45:35.243517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.475 qpair failed and we were unable to recover it. 
00:36:14.475 [2024-12-14 22:45:35.253479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.475 [2024-12-14 22:45:35.253537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.475 [2024-12-14 22:45:35.253550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.475 [2024-12-14 22:45:35.253557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.475 [2024-12-14 22:45:35.253564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.475 [2024-12-14 22:45:35.253578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.475 qpair failed and we were unable to recover it. 
00:36:14.475 [2024-12-14 22:45:35.263522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.475 [2024-12-14 22:45:35.263577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.475 [2024-12-14 22:45:35.263590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.475 [2024-12-14 22:45:35.263597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.475 [2024-12-14 22:45:35.263603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.475 [2024-12-14 22:45:35.263619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.475 qpair failed and we were unable to recover it. 
00:36:14.475 [2024-12-14 22:45:35.273539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.475 [2024-12-14 22:45:35.273590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.475 [2024-12-14 22:45:35.273603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.475 [2024-12-14 22:45:35.273609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.475 [2024-12-14 22:45:35.273616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.475 [2024-12-14 22:45:35.273632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.475 qpair failed and we were unable to recover it. 
00:36:14.475 [2024-12-14 22:45:35.283578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.475 [2024-12-14 22:45:35.283631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.475 [2024-12-14 22:45:35.283644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.475 [2024-12-14 22:45:35.283654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.475 [2024-12-14 22:45:35.283660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.475 [2024-12-14 22:45:35.283675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.475 qpair failed and we were unable to recover it. 
00:36:14.475 [2024-12-14 22:45:35.293599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.475 [2024-12-14 22:45:35.293654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.475 [2024-12-14 22:45:35.293667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.475 [2024-12-14 22:45:35.293674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.475 [2024-12-14 22:45:35.293680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.475 [2024-12-14 22:45:35.293695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.475 qpair failed and we were unable to recover it. 
00:36:14.475 [2024-12-14 22:45:35.303629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.475 [2024-12-14 22:45:35.303684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.475 [2024-12-14 22:45:35.303698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.475 [2024-12-14 22:45:35.303704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.475 [2024-12-14 22:45:35.303711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.475 [2024-12-14 22:45:35.303726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.475 qpair failed and we were unable to recover it. 
00:36:14.475 [2024-12-14 22:45:35.313691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.475 [2024-12-14 22:45:35.313743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.475 [2024-12-14 22:45:35.313757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.475 [2024-12-14 22:45:35.313764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.475 [2024-12-14 22:45:35.313770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.475 [2024-12-14 22:45:35.313785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.475 qpair failed and we were unable to recover it. 
00:36:14.475 [2024-12-14 22:45:35.323755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.475 [2024-12-14 22:45:35.323811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.475 [2024-12-14 22:45:35.323825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.475 [2024-12-14 22:45:35.323832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.475 [2024-12-14 22:45:35.323839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.475 [2024-12-14 22:45:35.323855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.475 qpair failed and we were unable to recover it. 
00:36:14.476 [2024-12-14 22:45:35.333719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.476 [2024-12-14 22:45:35.333776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.476 [2024-12-14 22:45:35.333789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.476 [2024-12-14 22:45:35.333796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.476 [2024-12-14 22:45:35.333802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.476 [2024-12-14 22:45:35.333817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.476 qpair failed and we were unable to recover it. 
00:36:14.476 [2024-12-14 22:45:35.343750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.476 [2024-12-14 22:45:35.343806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.476 [2024-12-14 22:45:35.343821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.476 [2024-12-14 22:45:35.343827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.476 [2024-12-14 22:45:35.343834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.476 [2024-12-14 22:45:35.343849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.476 qpair failed and we were unable to recover it. 
00:36:14.476 [2024-12-14 22:45:35.353771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.476 [2024-12-14 22:45:35.353826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.476 [2024-12-14 22:45:35.353839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.476 [2024-12-14 22:45:35.353846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.476 [2024-12-14 22:45:35.353852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.476 [2024-12-14 22:45:35.353868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.476 qpair failed and we were unable to recover it. 
00:36:14.735 [2024-12-14 22:45:35.363813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.735 [2024-12-14 22:45:35.363874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.735 [2024-12-14 22:45:35.363888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.735 [2024-12-14 22:45:35.363895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.735 [2024-12-14 22:45:35.363905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.735 [2024-12-14 22:45:35.363922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.735 qpair failed and we were unable to recover it.
00:36:14.735 [2024-12-14 22:45:35.373884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.735 [2024-12-14 22:45:35.373949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.735 [2024-12-14 22:45:35.373963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.735 [2024-12-14 22:45:35.373970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.735 [2024-12-14 22:45:35.373976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.735 [2024-12-14 22:45:35.373991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.735 qpair failed and we were unable to recover it.
00:36:14.735 [2024-12-14 22:45:35.383864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.735 [2024-12-14 22:45:35.383920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.735 [2024-12-14 22:45:35.383933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.735 [2024-12-14 22:45:35.383940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.735 [2024-12-14 22:45:35.383946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.735 [2024-12-14 22:45:35.383961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.735 qpair failed and we were unable to recover it.
00:36:14.735 [2024-12-14 22:45:35.393934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.735 [2024-12-14 22:45:35.394020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.735 [2024-12-14 22:45:35.394033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.735 [2024-12-14 22:45:35.394040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.735 [2024-12-14 22:45:35.394046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.736 [2024-12-14 22:45:35.394061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.736 qpair failed and we were unable to recover it.
00:36:14.736 [2024-12-14 22:45:35.403920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.736 [2024-12-14 22:45:35.403976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.736 [2024-12-14 22:45:35.403989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.736 [2024-12-14 22:45:35.403995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.736 [2024-12-14 22:45:35.404002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.736 [2024-12-14 22:45:35.404016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.736 qpair failed and we were unable to recover it.
00:36:14.736 [2024-12-14 22:45:35.413964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.736 [2024-12-14 22:45:35.414021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.736 [2024-12-14 22:45:35.414034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.736 [2024-12-14 22:45:35.414044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.736 [2024-12-14 22:45:35.414051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.736 [2024-12-14 22:45:35.414066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.736 qpair failed and we were unable to recover it.
00:36:14.736 [2024-12-14 22:45:35.423969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.736 [2024-12-14 22:45:35.424030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.736 [2024-12-14 22:45:35.424043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.736 [2024-12-14 22:45:35.424051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.736 [2024-12-14 22:45:35.424057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.736 [2024-12-14 22:45:35.424073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.736 qpair failed and we were unable to recover it.
00:36:14.736 [2024-12-14 22:45:35.434000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.736 [2024-12-14 22:45:35.434050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.736 [2024-12-14 22:45:35.434063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.736 [2024-12-14 22:45:35.434071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.736 [2024-12-14 22:45:35.434077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.736 [2024-12-14 22:45:35.434092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.736 qpair failed and we were unable to recover it.
00:36:14.736 [2024-12-14 22:45:35.444063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.736 [2024-12-14 22:45:35.444121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.736 [2024-12-14 22:45:35.444136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.736 [2024-12-14 22:45:35.444143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.736 [2024-12-14 22:45:35.444149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.736 [2024-12-14 22:45:35.444165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.736 qpair failed and we were unable to recover it.
00:36:14.736 [2024-12-14 22:45:35.454066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.736 [2024-12-14 22:45:35.454121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.736 [2024-12-14 22:45:35.454134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.736 [2024-12-14 22:45:35.454141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.736 [2024-12-14 22:45:35.454148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.736 [2024-12-14 22:45:35.454166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.736 qpair failed and we were unable to recover it.
00:36:14.736 [2024-12-14 22:45:35.464097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.736 [2024-12-14 22:45:35.464170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.736 [2024-12-14 22:45:35.464184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.736 [2024-12-14 22:45:35.464191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.736 [2024-12-14 22:45:35.464197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.736 [2024-12-14 22:45:35.464213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.736 qpair failed and we were unable to recover it.
00:36:14.736 [2024-12-14 22:45:35.474132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.736 [2024-12-14 22:45:35.474187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.736 [2024-12-14 22:45:35.474201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.736 [2024-12-14 22:45:35.474208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.736 [2024-12-14 22:45:35.474214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.736 [2024-12-14 22:45:35.474230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.736 qpair failed and we were unable to recover it.
00:36:14.736 [2024-12-14 22:45:35.484202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.736 [2024-12-14 22:45:35.484257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.736 [2024-12-14 22:45:35.484270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.736 [2024-12-14 22:45:35.484277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.736 [2024-12-14 22:45:35.484283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.736 [2024-12-14 22:45:35.484299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.736 qpair failed and we were unable to recover it.
00:36:14.736 [2024-12-14 22:45:35.494184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.736 [2024-12-14 22:45:35.494237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.736 [2024-12-14 22:45:35.494251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.736 [2024-12-14 22:45:35.494257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.736 [2024-12-14 22:45:35.494264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.736 [2024-12-14 22:45:35.494279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.736 qpair failed and we were unable to recover it.
00:36:14.736 [2024-12-14 22:45:35.504212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.736 [2024-12-14 22:45:35.504298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.736 [2024-12-14 22:45:35.504312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.736 [2024-12-14 22:45:35.504319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.736 [2024-12-14 22:45:35.504325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.736 [2024-12-14 22:45:35.504340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.736 qpair failed and we were unable to recover it.
00:36:14.736 [2024-12-14 22:45:35.514238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.736 [2024-12-14 22:45:35.514289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.736 [2024-12-14 22:45:35.514303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.736 [2024-12-14 22:45:35.514310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.736 [2024-12-14 22:45:35.514316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.736 [2024-12-14 22:45:35.514331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.736 qpair failed and we were unable to recover it.
00:36:14.736 [2024-12-14 22:45:35.524321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.736 [2024-12-14 22:45:35.524378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.736 [2024-12-14 22:45:35.524392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.736 [2024-12-14 22:45:35.524399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.736 [2024-12-14 22:45:35.524405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.736 [2024-12-14 22:45:35.524421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.736 qpair failed and we were unable to recover it.
00:36:14.736 [2024-12-14 22:45:35.534297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.737 [2024-12-14 22:45:35.534351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.737 [2024-12-14 22:45:35.534364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.737 [2024-12-14 22:45:35.534371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.737 [2024-12-14 22:45:35.534377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.737 [2024-12-14 22:45:35.534392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.737 qpair failed and we were unable to recover it.
00:36:14.737 [2024-12-14 22:45:35.544335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.737 [2024-12-14 22:45:35.544388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.737 [2024-12-14 22:45:35.544405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.737 [2024-12-14 22:45:35.544413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.737 [2024-12-14 22:45:35.544419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.737 [2024-12-14 22:45:35.544434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.737 qpair failed and we were unable to recover it.
00:36:14.737 [2024-12-14 22:45:35.554388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.737 [2024-12-14 22:45:35.554439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.737 [2024-12-14 22:45:35.554452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.737 [2024-12-14 22:45:35.554459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.737 [2024-12-14 22:45:35.554466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.737 [2024-12-14 22:45:35.554480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.737 qpair failed and we were unable to recover it.
00:36:14.737 [2024-12-14 22:45:35.564404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.737 [2024-12-14 22:45:35.564458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.737 [2024-12-14 22:45:35.564472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.737 [2024-12-14 22:45:35.564478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.737 [2024-12-14 22:45:35.564485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.737 [2024-12-14 22:45:35.564500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.737 qpair failed and we were unable to recover it.
00:36:14.737 [2024-12-14 22:45:35.574417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.737 [2024-12-14 22:45:35.574475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.737 [2024-12-14 22:45:35.574488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.737 [2024-12-14 22:45:35.574495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.737 [2024-12-14 22:45:35.574502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.737 [2024-12-14 22:45:35.574517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.737 qpair failed and we were unable to recover it.
00:36:14.737 [2024-12-14 22:45:35.584445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.737 [2024-12-14 22:45:35.584497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.737 [2024-12-14 22:45:35.584511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.737 [2024-12-14 22:45:35.584518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.737 [2024-12-14 22:45:35.584527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.737 [2024-12-14 22:45:35.584541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.737 qpair failed and we were unable to recover it.
00:36:14.737 [2024-12-14 22:45:35.594527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.737 [2024-12-14 22:45:35.594584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.737 [2024-12-14 22:45:35.594598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.737 [2024-12-14 22:45:35.594605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.737 [2024-12-14 22:45:35.594611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.737 [2024-12-14 22:45:35.594626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.737 qpair failed and we were unable to recover it.
00:36:14.737 [2024-12-14 22:45:35.604516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.737 [2024-12-14 22:45:35.604601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.737 [2024-12-14 22:45:35.604615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.737 [2024-12-14 22:45:35.604623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.737 [2024-12-14 22:45:35.604629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.737 [2024-12-14 22:45:35.604644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.737 qpair failed and we were unable to recover it.
00:36:14.737 [2024-12-14 22:45:35.614592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.737 [2024-12-14 22:45:35.614648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.737 [2024-12-14 22:45:35.614662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.737 [2024-12-14 22:45:35.614669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.737 [2024-12-14 22:45:35.614676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.737 [2024-12-14 22:45:35.614691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.737 qpair failed and we were unable to recover it.
00:36:14.998 [2024-12-14 22:45:35.624585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.998 [2024-12-14 22:45:35.624672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.998 [2024-12-14 22:45:35.624686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.998 [2024-12-14 22:45:35.624693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.998 [2024-12-14 22:45:35.624700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.998 [2024-12-14 22:45:35.624716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.998 qpair failed and we were unable to recover it.
00:36:14.998 [2024-12-14 22:45:35.634597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.998 [2024-12-14 22:45:35.634650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.998 [2024-12-14 22:45:35.634663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.998 [2024-12-14 22:45:35.634670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.998 [2024-12-14 22:45:35.634677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.998 [2024-12-14 22:45:35.634692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.998 qpair failed and we were unable to recover it.
00:36:14.998 [2024-12-14 22:45:35.644684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.998 [2024-12-14 22:45:35.644790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.998 [2024-12-14 22:45:35.644805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.998 [2024-12-14 22:45:35.644812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.998 [2024-12-14 22:45:35.644818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.998 [2024-12-14 22:45:35.644833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.998 qpair failed and we were unable to recover it.
00:36:14.998 [2024-12-14 22:45:35.654698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.998 [2024-12-14 22:45:35.654755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.998 [2024-12-14 22:45:35.654769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.998 [2024-12-14 22:45:35.654775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.998 [2024-12-14 22:45:35.654782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.998 [2024-12-14 22:45:35.654797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.998 qpair failed and we were unable to recover it.
00:36:14.998 [2024-12-14 22:45:35.664682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:14.998 [2024-12-14 22:45:35.664738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:14.998 [2024-12-14 22:45:35.664752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:14.998 [2024-12-14 22:45:35.664759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:14.998 [2024-12-14 22:45:35.664766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:14.998 [2024-12-14 22:45:35.664780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:14.998 qpair failed and we were unable to recover it.
00:36:14.998 [2024-12-14 22:45:35.674710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.998 [2024-12-14 22:45:35.674768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.998 [2024-12-14 22:45:35.674787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.998 [2024-12-14 22:45:35.674794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.998 [2024-12-14 22:45:35.674800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.998 [2024-12-14 22:45:35.674815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.998 qpair failed and we were unable to recover it. 
00:36:14.998 [2024-12-14 22:45:35.684795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.998 [2024-12-14 22:45:35.684900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.998 [2024-12-14 22:45:35.684918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.998 [2024-12-14 22:45:35.684925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.998 [2024-12-14 22:45:35.684930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.998 [2024-12-14 22:45:35.684945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.998 qpair failed and we were unable to recover it. 
00:36:14.998 [2024-12-14 22:45:35.694773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.998 [2024-12-14 22:45:35.694830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.998 [2024-12-14 22:45:35.694843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.998 [2024-12-14 22:45:35.694850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.998 [2024-12-14 22:45:35.694857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.998 [2024-12-14 22:45:35.694871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.998 qpair failed and we were unable to recover it. 
00:36:14.998 [2024-12-14 22:45:35.704790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.998 [2024-12-14 22:45:35.704847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.998 [2024-12-14 22:45:35.704860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.998 [2024-12-14 22:45:35.704867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.998 [2024-12-14 22:45:35.704874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.998 [2024-12-14 22:45:35.704888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.998 qpair failed and we were unable to recover it. 
00:36:14.998 [2024-12-14 22:45:35.714855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.998 [2024-12-14 22:45:35.714920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.998 [2024-12-14 22:45:35.714934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.998 [2024-12-14 22:45:35.714941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.998 [2024-12-14 22:45:35.714950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.998 [2024-12-14 22:45:35.714965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.998 qpair failed and we were unable to recover it. 
00:36:14.998 [2024-12-14 22:45:35.724866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.998 [2024-12-14 22:45:35.724923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.998 [2024-12-14 22:45:35.724936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.998 [2024-12-14 22:45:35.724943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.998 [2024-12-14 22:45:35.724949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.998 [2024-12-14 22:45:35.724965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.998 qpair failed and we were unable to recover it. 
00:36:14.998 [2024-12-14 22:45:35.734887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.998 [2024-12-14 22:45:35.734946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.998 [2024-12-14 22:45:35.734960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.998 [2024-12-14 22:45:35.734968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.999 [2024-12-14 22:45:35.734974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.999 [2024-12-14 22:45:35.734990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.999 qpair failed and we were unable to recover it. 
00:36:14.999 [2024-12-14 22:45:35.744932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.999 [2024-12-14 22:45:35.744983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.999 [2024-12-14 22:45:35.744998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.999 [2024-12-14 22:45:35.745005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.999 [2024-12-14 22:45:35.745012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.999 [2024-12-14 22:45:35.745028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.999 qpair failed and we were unable to recover it. 
00:36:14.999 [2024-12-14 22:45:35.754967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.999 [2024-12-14 22:45:35.755019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.999 [2024-12-14 22:45:35.755032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.999 [2024-12-14 22:45:35.755039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.999 [2024-12-14 22:45:35.755046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.999 [2024-12-14 22:45:35.755061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.999 qpair failed and we were unable to recover it. 
00:36:14.999 [2024-12-14 22:45:35.764977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.999 [2024-12-14 22:45:35.765034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.999 [2024-12-14 22:45:35.765050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.999 [2024-12-14 22:45:35.765057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.999 [2024-12-14 22:45:35.765065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.999 [2024-12-14 22:45:35.765080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.999 qpair failed and we were unable to recover it. 
00:36:14.999 [2024-12-14 22:45:35.775038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.999 [2024-12-14 22:45:35.775141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.999 [2024-12-14 22:45:35.775157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.999 [2024-12-14 22:45:35.775164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.999 [2024-12-14 22:45:35.775170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.999 [2024-12-14 22:45:35.775186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.999 qpair failed and we were unable to recover it. 
00:36:14.999 [2024-12-14 22:45:35.785044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.999 [2024-12-14 22:45:35.785099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.999 [2024-12-14 22:45:35.785112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.999 [2024-12-14 22:45:35.785118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.999 [2024-12-14 22:45:35.785125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.999 [2024-12-14 22:45:35.785139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.999 qpair failed and we were unable to recover it. 
00:36:14.999 [2024-12-14 22:45:35.795054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.999 [2024-12-14 22:45:35.795107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.999 [2024-12-14 22:45:35.795119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.999 [2024-12-14 22:45:35.795126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.999 [2024-12-14 22:45:35.795133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.999 [2024-12-14 22:45:35.795148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.999 qpair failed and we were unable to recover it. 
00:36:14.999 [2024-12-14 22:45:35.805090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.999 [2024-12-14 22:45:35.805159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.999 [2024-12-14 22:45:35.805175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.999 [2024-12-14 22:45:35.805182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.999 [2024-12-14 22:45:35.805188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.999 [2024-12-14 22:45:35.805202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.999 qpair failed and we were unable to recover it. 
00:36:14.999 [2024-12-14 22:45:35.815106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.999 [2024-12-14 22:45:35.815156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.999 [2024-12-14 22:45:35.815169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.999 [2024-12-14 22:45:35.815176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.999 [2024-12-14 22:45:35.815182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.999 [2024-12-14 22:45:35.815197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.999 qpair failed and we were unable to recover it. 
00:36:14.999 [2024-12-14 22:45:35.825124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.999 [2024-12-14 22:45:35.825180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.999 [2024-12-14 22:45:35.825194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.999 [2024-12-14 22:45:35.825201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.999 [2024-12-14 22:45:35.825207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.999 [2024-12-14 22:45:35.825223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.999 qpair failed and we were unable to recover it. 
00:36:14.999 [2024-12-14 22:45:35.835161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.999 [2024-12-14 22:45:35.835218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.999 [2024-12-14 22:45:35.835230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.999 [2024-12-14 22:45:35.835237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.999 [2024-12-14 22:45:35.835243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.999 [2024-12-14 22:45:35.835259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.999 qpair failed and we were unable to recover it. 
00:36:14.999 [2024-12-14 22:45:35.845122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.999 [2024-12-14 22:45:35.845181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.999 [2024-12-14 22:45:35.845194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.999 [2024-12-14 22:45:35.845207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.999 [2024-12-14 22:45:35.845213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.999 [2024-12-14 22:45:35.845228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.999 qpair failed and we were unable to recover it. 
00:36:14.999 [2024-12-14 22:45:35.855197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.999 [2024-12-14 22:45:35.855250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.999 [2024-12-14 22:45:35.855264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.999 [2024-12-14 22:45:35.855270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.999 [2024-12-14 22:45:35.855276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.999 [2024-12-14 22:45:35.855292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.999 qpair failed and we were unable to recover it. 
00:36:14.999 [2024-12-14 22:45:35.865246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.999 [2024-12-14 22:45:35.865298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.999 [2024-12-14 22:45:35.865312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.999 [2024-12-14 22:45:35.865318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.999 [2024-12-14 22:45:35.865325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:14.999 [2024-12-14 22:45:35.865341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.000 qpair failed and we were unable to recover it. 
00:36:15.000 [2024-12-14 22:45:35.875265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.000 [2024-12-14 22:45:35.875317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.000 [2024-12-14 22:45:35.875330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.000 [2024-12-14 22:45:35.875337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.000 [2024-12-14 22:45:35.875343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.000 [2024-12-14 22:45:35.875359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.000 qpair failed and we were unable to recover it. 
00:36:15.260 [2024-12-14 22:45:35.885299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.260 [2024-12-14 22:45:35.885357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.260 [2024-12-14 22:45:35.885371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.260 [2024-12-14 22:45:35.885378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.260 [2024-12-14 22:45:35.885384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.261 [2024-12-14 22:45:35.885399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.261 qpair failed and we were unable to recover it. 
00:36:15.261 [2024-12-14 22:45:35.895297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.261 [2024-12-14 22:45:35.895350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.261 [2024-12-14 22:45:35.895363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.261 [2024-12-14 22:45:35.895370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.261 [2024-12-14 22:45:35.895376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.261 [2024-12-14 22:45:35.895391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.261 qpair failed and we were unable to recover it. 
00:36:15.261 [2024-12-14 22:45:35.905351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.261 [2024-12-14 22:45:35.905411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.261 [2024-12-14 22:45:35.905424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.261 [2024-12-14 22:45:35.905431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.261 [2024-12-14 22:45:35.905437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.261 [2024-12-14 22:45:35.905452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.261 qpair failed and we were unable to recover it. 
00:36:15.261 [2024-12-14 22:45:35.915423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.261 [2024-12-14 22:45:35.915483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.261 [2024-12-14 22:45:35.915497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.261 [2024-12-14 22:45:35.915503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.261 [2024-12-14 22:45:35.915509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.261 [2024-12-14 22:45:35.915524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.261 qpair failed and we were unable to recover it. 
00:36:15.261 [2024-12-14 22:45:35.925431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.261 [2024-12-14 22:45:35.925486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.261 [2024-12-14 22:45:35.925499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.261 [2024-12-14 22:45:35.925506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.261 [2024-12-14 22:45:35.925512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.261 [2024-12-14 22:45:35.925526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.261 qpair failed and we were unable to recover it. 
00:36:15.261 [2024-12-14 22:45:35.935440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.261 [2024-12-14 22:45:35.935500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.261 [2024-12-14 22:45:35.935513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.261 [2024-12-14 22:45:35.935520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.261 [2024-12-14 22:45:35.935526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.261 [2024-12-14 22:45:35.935540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.261 qpair failed and we were unable to recover it. 
00:36:15.261 [2024-12-14 22:45:35.945468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.261 [2024-12-14 22:45:35.945525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.261 [2024-12-14 22:45:35.945539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.261 [2024-12-14 22:45:35.945547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.261 [2024-12-14 22:45:35.945553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.261 [2024-12-14 22:45:35.945567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.261 qpair failed and we were unable to recover it. 
00:36:15.261 [2024-12-14 22:45:35.955408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.261 [2024-12-14 22:45:35.955463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.261 [2024-12-14 22:45:35.955476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.261 [2024-12-14 22:45:35.955483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.261 [2024-12-14 22:45:35.955489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.261 [2024-12-14 22:45:35.955503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.261 qpair failed and we were unable to recover it. 
00:36:15.261 [2024-12-14 22:45:35.965528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.261 [2024-12-14 22:45:35.965584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.261 [2024-12-14 22:45:35.965597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.261 [2024-12-14 22:45:35.965604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.261 [2024-12-14 22:45:35.965611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.261 [2024-12-14 22:45:35.965627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.261 qpair failed and we were unable to recover it.
00:36:15.261 [2024-12-14 22:45:35.975482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.261 [2024-12-14 22:45:35.975534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.261 [2024-12-14 22:45:35.975548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.261 [2024-12-14 22:45:35.975557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.261 [2024-12-14 22:45:35.975563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.261 [2024-12-14 22:45:35.975579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.261 qpair failed and we were unable to recover it.
00:36:15.261 [2024-12-14 22:45:35.985602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.261 [2024-12-14 22:45:35.985670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.261 [2024-12-14 22:45:35.985686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.261 [2024-12-14 22:45:35.985693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.261 [2024-12-14 22:45:35.985699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.261 [2024-12-14 22:45:35.985714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.261 qpair failed and we were unable to recover it.
00:36:15.261 [2024-12-14 22:45:35.995598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.261 [2024-12-14 22:45:35.995651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.261 [2024-12-14 22:45:35.995664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.261 [2024-12-14 22:45:35.995671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.261 [2024-12-14 22:45:35.995677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.261 [2024-12-14 22:45:35.995692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.261 qpair failed and we were unable to recover it.
00:36:15.261 [2024-12-14 22:45:36.005574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.261 [2024-12-14 22:45:36.005629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.261 [2024-12-14 22:45:36.005643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.261 [2024-12-14 22:45:36.005650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.261 [2024-12-14 22:45:36.005656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.261 [2024-12-14 22:45:36.005671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.261 qpair failed and we were unable to recover it.
00:36:15.261 [2024-12-14 22:45:36.015643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.261 [2024-12-14 22:45:36.015698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.261 [2024-12-14 22:45:36.015711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.261 [2024-12-14 22:45:36.015718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.261 [2024-12-14 22:45:36.015724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.262 [2024-12-14 22:45:36.015745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.262 qpair failed and we were unable to recover it.
00:36:15.262 [2024-12-14 22:45:36.025688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.262 [2024-12-14 22:45:36.025739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.262 [2024-12-14 22:45:36.025752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.262 [2024-12-14 22:45:36.025759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.262 [2024-12-14 22:45:36.025764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.262 [2024-12-14 22:45:36.025779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.262 qpair failed and we were unable to recover it.
00:36:15.262 [2024-12-14 22:45:36.035711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.262 [2024-12-14 22:45:36.035767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.262 [2024-12-14 22:45:36.035780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.262 [2024-12-14 22:45:36.035787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.262 [2024-12-14 22:45:36.035793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.262 [2024-12-14 22:45:36.035808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.262 qpair failed and we were unable to recover it.
00:36:15.262 [2024-12-14 22:45:36.045759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.262 [2024-12-14 22:45:36.045814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.262 [2024-12-14 22:45:36.045828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.262 [2024-12-14 22:45:36.045835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.262 [2024-12-14 22:45:36.045841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.262 [2024-12-14 22:45:36.045856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.262 qpair failed and we were unable to recover it.
00:36:15.262 [2024-12-14 22:45:36.055784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.262 [2024-12-14 22:45:36.055863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.262 [2024-12-14 22:45:36.055878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.262 [2024-12-14 22:45:36.055885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.262 [2024-12-14 22:45:36.055892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.262 [2024-12-14 22:45:36.055911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.262 qpair failed and we were unable to recover it.
00:36:15.262 [2024-12-14 22:45:36.065802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.262 [2024-12-14 22:45:36.065854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.262 [2024-12-14 22:45:36.065868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.262 [2024-12-14 22:45:36.065876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.262 [2024-12-14 22:45:36.065883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.262 [2024-12-14 22:45:36.065898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.262 qpair failed and we were unable to recover it.
00:36:15.262 [2024-12-14 22:45:36.075839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.262 [2024-12-14 22:45:36.075890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.262 [2024-12-14 22:45:36.075906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.262 [2024-12-14 22:45:36.075914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.262 [2024-12-14 22:45:36.075920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.262 [2024-12-14 22:45:36.075935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.262 qpair failed and we were unable to recover it.
00:36:15.262 [2024-12-14 22:45:36.085869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.262 [2024-12-14 22:45:36.085945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.262 [2024-12-14 22:45:36.085958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.262 [2024-12-14 22:45:36.085965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.262 [2024-12-14 22:45:36.085971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.262 [2024-12-14 22:45:36.085985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.262 qpair failed and we were unable to recover it.
00:36:15.262 [2024-12-14 22:45:36.095904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.262 [2024-12-14 22:45:36.095974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.262 [2024-12-14 22:45:36.095987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.262 [2024-12-14 22:45:36.095995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.262 [2024-12-14 22:45:36.096001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.262 [2024-12-14 22:45:36.096016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.262 qpair failed and we were unable to recover it.
00:36:15.262 [2024-12-14 22:45:36.105920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.262 [2024-12-14 22:45:36.105973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.262 [2024-12-14 22:45:36.105990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.262 [2024-12-14 22:45:36.105996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.262 [2024-12-14 22:45:36.106002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.262 [2024-12-14 22:45:36.106017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.262 qpair failed and we were unable to recover it.
00:36:15.262 [2024-12-14 22:45:36.115948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.262 [2024-12-14 22:45:36.116007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.262 [2024-12-14 22:45:36.116022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.262 [2024-12-14 22:45:36.116029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.262 [2024-12-14 22:45:36.116035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.262 [2024-12-14 22:45:36.116050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.262 qpair failed and we were unable to recover it.
00:36:15.262 [2024-12-14 22:45:36.125980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.262 [2024-12-14 22:45:36.126034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.262 [2024-12-14 22:45:36.126048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.262 [2024-12-14 22:45:36.126055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.262 [2024-12-14 22:45:36.126060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.262 [2024-12-14 22:45:36.126076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.262 qpair failed and we were unable to recover it.
00:36:15.262 [2024-12-14 22:45:36.136004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.262 [2024-12-14 22:45:36.136062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.262 [2024-12-14 22:45:36.136075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.262 [2024-12-14 22:45:36.136082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.262 [2024-12-14 22:45:36.136089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.262 [2024-12-14 22:45:36.136104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.262 qpair failed and we were unable to recover it.
00:36:15.523 [2024-12-14 22:45:36.146149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.523 [2024-12-14 22:45:36.146213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.523 [2024-12-14 22:45:36.146227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.523 [2024-12-14 22:45:36.146234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.523 [2024-12-14 22:45:36.146244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.523 [2024-12-14 22:45:36.146260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.523 qpair failed and we were unable to recover it.
00:36:15.523 [2024-12-14 22:45:36.156130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.523 [2024-12-14 22:45:36.156188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.523 [2024-12-14 22:45:36.156203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.523 [2024-12-14 22:45:36.156210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.523 [2024-12-14 22:45:36.156217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.523 [2024-12-14 22:45:36.156232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.523 qpair failed and we were unable to recover it.
00:36:15.523 [2024-12-14 22:45:36.166134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.524 [2024-12-14 22:45:36.166191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.524 [2024-12-14 22:45:36.166205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.524 [2024-12-14 22:45:36.166211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.524 [2024-12-14 22:45:36.166217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.524 [2024-12-14 22:45:36.166233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.524 qpair failed and we were unable to recover it.
00:36:15.524 [2024-12-14 22:45:36.176149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.524 [2024-12-14 22:45:36.176204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.524 [2024-12-14 22:45:36.176218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.524 [2024-12-14 22:45:36.176224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.524 [2024-12-14 22:45:36.176232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.524 [2024-12-14 22:45:36.176246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.524 qpair failed and we were unable to recover it.
00:36:15.524 [2024-12-14 22:45:36.186142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.524 [2024-12-14 22:45:36.186198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.524 [2024-12-14 22:45:36.186215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.524 [2024-12-14 22:45:36.186222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.524 [2024-12-14 22:45:36.186228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.524 [2024-12-14 22:45:36.186245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.524 qpair failed and we were unable to recover it.
00:36:15.524 [2024-12-14 22:45:36.196206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.524 [2024-12-14 22:45:36.196259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.524 [2024-12-14 22:45:36.196272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.524 [2024-12-14 22:45:36.196279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.524 [2024-12-14 22:45:36.196287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.524 [2024-12-14 22:45:36.196301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.524 qpair failed and we were unable to recover it.
00:36:15.524 [2024-12-14 22:45:36.206151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.524 [2024-12-14 22:45:36.206207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.524 [2024-12-14 22:45:36.206220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.524 [2024-12-14 22:45:36.206227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.524 [2024-12-14 22:45:36.206233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.524 [2024-12-14 22:45:36.206249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.524 qpair failed and we were unable to recover it.
00:36:15.524 [2024-12-14 22:45:36.216145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.524 [2024-12-14 22:45:36.216212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.524 [2024-12-14 22:45:36.216226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.524 [2024-12-14 22:45:36.216234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.524 [2024-12-14 22:45:36.216240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.524 [2024-12-14 22:45:36.216256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.524 qpair failed and we were unable to recover it.
00:36:15.524 [2024-12-14 22:45:36.226285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.524 [2024-12-14 22:45:36.226343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.524 [2024-12-14 22:45:36.226356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.524 [2024-12-14 22:45:36.226364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.524 [2024-12-14 22:45:36.226370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.524 [2024-12-14 22:45:36.226387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.524 qpair failed and we were unable to recover it.
00:36:15.524 [2024-12-14 22:45:36.236281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.524 [2024-12-14 22:45:36.236333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.524 [2024-12-14 22:45:36.236350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.524 [2024-12-14 22:45:36.236357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.524 [2024-12-14 22:45:36.236364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.524 [2024-12-14 22:45:36.236379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.524 qpair failed and we were unable to recover it.
00:36:15.524 [2024-12-14 22:45:36.246336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.524 [2024-12-14 22:45:36.246406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.524 [2024-12-14 22:45:36.246420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.524 [2024-12-14 22:45:36.246427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.524 [2024-12-14 22:45:36.246433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.524 [2024-12-14 22:45:36.246449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.524 qpair failed and we were unable to recover it.
00:36:15.524 [2024-12-14 22:45:36.256265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.524 [2024-12-14 22:45:36.256316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.524 [2024-12-14 22:45:36.256330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.524 [2024-12-14 22:45:36.256336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.524 [2024-12-14 22:45:36.256342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.524 [2024-12-14 22:45:36.256357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.524 qpair failed and we were unable to recover it.
00:36:15.524 [2024-12-14 22:45:36.266363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.524 [2024-12-14 22:45:36.266418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.524 [2024-12-14 22:45:36.266432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.524 [2024-12-14 22:45:36.266439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.524 [2024-12-14 22:45:36.266445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.524 [2024-12-14 22:45:36.266461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.524 qpair failed and we were unable to recover it.
00:36:15.524 [2024-12-14 22:45:36.276307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.524 [2024-12-14 22:45:36.276365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.524 [2024-12-14 22:45:36.276379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.524 [2024-12-14 22:45:36.276386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.524 [2024-12-14 22:45:36.276396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.524 [2024-12-14 22:45:36.276410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.524 qpair failed and we were unable to recover it.
00:36:15.524 [2024-12-14 22:45:36.286425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.524 [2024-12-14 22:45:36.286494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.524 [2024-12-14 22:45:36.286508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.524 [2024-12-14 22:45:36.286514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.524 [2024-12-14 22:45:36.286521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.524 [2024-12-14 22:45:36.286536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.524 qpair failed and we were unable to recover it.
00:36:15.524 [2024-12-14 22:45:36.296445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.524 [2024-12-14 22:45:36.296498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.524 [2024-12-14 22:45:36.296511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.525 [2024-12-14 22:45:36.296518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.525 [2024-12-14 22:45:36.296525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.525 [2024-12-14 22:45:36.296540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.525 qpair failed and we were unable to recover it.
00:36:15.525 [2024-12-14 22:45:36.306452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.525 [2024-12-14 22:45:36.306523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.525 [2024-12-14 22:45:36.306536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.525 [2024-12-14 22:45:36.306543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.525 [2024-12-14 22:45:36.306549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:15.525 [2024-12-14 22:45:36.306564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.525 qpair failed and we were unable to recover it.
00:36:15.525 [2024-12-14 22:45:36.316428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.525 [2024-12-14 22:45:36.316482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.525 [2024-12-14 22:45:36.316496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.525 [2024-12-14 22:45:36.316503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.525 [2024-12-14 22:45:36.316509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.525 [2024-12-14 22:45:36.316524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.525 qpair failed and we were unable to recover it. 
00:36:15.525 [2024-12-14 22:45:36.326527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.525 [2024-12-14 22:45:36.326624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.525 [2024-12-14 22:45:36.326639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.525 [2024-12-14 22:45:36.326646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.525 [2024-12-14 22:45:36.326652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.525 [2024-12-14 22:45:36.326667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.525 qpair failed and we were unable to recover it. 
00:36:15.525 [2024-12-14 22:45:36.336489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.525 [2024-12-14 22:45:36.336573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.525 [2024-12-14 22:45:36.336588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.525 [2024-12-14 22:45:36.336595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.525 [2024-12-14 22:45:36.336601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.525 [2024-12-14 22:45:36.336617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.525 qpair failed and we were unable to recover it. 
00:36:15.525 [2024-12-14 22:45:36.346515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.525 [2024-12-14 22:45:36.346579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.525 [2024-12-14 22:45:36.346594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.525 [2024-12-14 22:45:36.346600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.525 [2024-12-14 22:45:36.346607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.525 [2024-12-14 22:45:36.346622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.525 qpair failed and we were unable to recover it. 
00:36:15.525 [2024-12-14 22:45:36.356536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.525 [2024-12-14 22:45:36.356587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.525 [2024-12-14 22:45:36.356600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.525 [2024-12-14 22:45:36.356607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.525 [2024-12-14 22:45:36.356614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.525 [2024-12-14 22:45:36.356628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.525 qpair failed and we were unable to recover it. 
00:36:15.525 [2024-12-14 22:45:36.366583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.525 [2024-12-14 22:45:36.366643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.525 [2024-12-14 22:45:36.366662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.525 [2024-12-14 22:45:36.366670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.525 [2024-12-14 22:45:36.366677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.525 [2024-12-14 22:45:36.366693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.525 qpair failed and we were unable to recover it. 
00:36:15.525 [2024-12-14 22:45:36.376673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.525 [2024-12-14 22:45:36.376756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.525 [2024-12-14 22:45:36.376771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.525 [2024-12-14 22:45:36.376779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.525 [2024-12-14 22:45:36.376785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.525 [2024-12-14 22:45:36.376800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.525 qpair failed and we were unable to recover it. 
00:36:15.525 [2024-12-14 22:45:36.386696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.525 [2024-12-14 22:45:36.386759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.525 [2024-12-14 22:45:36.386773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.525 [2024-12-14 22:45:36.386780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.525 [2024-12-14 22:45:36.386786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.525 [2024-12-14 22:45:36.386803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.525 qpair failed and we were unable to recover it. 
00:36:15.525 [2024-12-14 22:45:36.396711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.525 [2024-12-14 22:45:36.396790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.525 [2024-12-14 22:45:36.396815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.525 [2024-12-14 22:45:36.396823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.525 [2024-12-14 22:45:36.396830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.525 [2024-12-14 22:45:36.396850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.525 qpair failed and we were unable to recover it. 
00:36:15.786 [2024-12-14 22:45:36.406737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-12-14 22:45:36.406814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-12-14 22:45:36.406828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-12-14 22:45:36.406838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-12-14 22:45:36.406845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.786 [2024-12-14 22:45:36.406860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.786 qpair failed and we were unable to recover it. 
00:36:15.786 [2024-12-14 22:45:36.416811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-12-14 22:45:36.416871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-12-14 22:45:36.416885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-12-14 22:45:36.416892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-12-14 22:45:36.416898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.786 [2024-12-14 22:45:36.416919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.786 qpair failed and we were unable to recover it. 
00:36:15.786 [2024-12-14 22:45:36.426751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-12-14 22:45:36.426809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-12-14 22:45:36.426823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-12-14 22:45:36.426830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-12-14 22:45:36.426837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.786 [2024-12-14 22:45:36.426852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.786 qpair failed and we were unable to recover it. 
00:36:15.786 [2024-12-14 22:45:36.436889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-12-14 22:45:36.436949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-12-14 22:45:36.436963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-12-14 22:45:36.436969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-12-14 22:45:36.436976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.786 [2024-12-14 22:45:36.436991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.786 qpair failed and we were unable to recover it. 
00:36:15.786 [2024-12-14 22:45:36.446935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-12-14 22:45:36.446992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-12-14 22:45:36.447006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-12-14 22:45:36.447013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-12-14 22:45:36.447020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.786 [2024-12-14 22:45:36.447035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.786 qpair failed and we were unable to recover it. 
00:36:15.786 [2024-12-14 22:45:36.456838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.787 [2024-12-14 22:45:36.456893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.787 [2024-12-14 22:45:36.456910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.787 [2024-12-14 22:45:36.456917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.787 [2024-12-14 22:45:36.456925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.787 [2024-12-14 22:45:36.456940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.787 qpair failed and we were unable to recover it. 
00:36:15.787 [2024-12-14 22:45:36.466962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.787 [2024-12-14 22:45:36.467033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.787 [2024-12-14 22:45:36.467049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.787 [2024-12-14 22:45:36.467056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.787 [2024-12-14 22:45:36.467062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.787 [2024-12-14 22:45:36.467077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.787 qpair failed and we were unable to recover it. 
00:36:15.787 [2024-12-14 22:45:36.476926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.787 [2024-12-14 22:45:36.477016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.787 [2024-12-14 22:45:36.477031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.787 [2024-12-14 22:45:36.477038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.787 [2024-12-14 22:45:36.477044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.787 [2024-12-14 22:45:36.477060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.787 qpair failed and we were unable to recover it. 
00:36:15.787 [2024-12-14 22:45:36.486933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.787 [2024-12-14 22:45:36.486987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.787 [2024-12-14 22:45:36.487001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.787 [2024-12-14 22:45:36.487008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.787 [2024-12-14 22:45:36.487014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.787 [2024-12-14 22:45:36.487030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.787 qpair failed and we were unable to recover it. 
00:36:15.787 [2024-12-14 22:45:36.496993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.787 [2024-12-14 22:45:36.497053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.787 [2024-12-14 22:45:36.497065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.787 [2024-12-14 22:45:36.497072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.787 [2024-12-14 22:45:36.497078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.787 [2024-12-14 22:45:36.497093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.787 qpair failed and we were unable to recover it. 
00:36:15.787 [2024-12-14 22:45:36.507047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.787 [2024-12-14 22:45:36.507102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.787 [2024-12-14 22:45:36.507116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.787 [2024-12-14 22:45:36.507123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.787 [2024-12-14 22:45:36.507129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.787 [2024-12-14 22:45:36.507143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.787 qpair failed and we were unable to recover it. 
00:36:15.787 [2024-12-14 22:45:36.517074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.787 [2024-12-14 22:45:36.517125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.787 [2024-12-14 22:45:36.517139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.787 [2024-12-14 22:45:36.517146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.787 [2024-12-14 22:45:36.517152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.787 [2024-12-14 22:45:36.517168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.787 qpair failed and we were unable to recover it. 
00:36:15.787 [2024-12-14 22:45:36.527038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.787 [2024-12-14 22:45:36.527093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.787 [2024-12-14 22:45:36.527106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.787 [2024-12-14 22:45:36.527113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.787 [2024-12-14 22:45:36.527119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.787 [2024-12-14 22:45:36.527134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.787 qpair failed and we were unable to recover it. 
00:36:15.787 [2024-12-14 22:45:36.537053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.787 [2024-12-14 22:45:36.537102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.787 [2024-12-14 22:45:36.537116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.787 [2024-12-14 22:45:36.537126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.787 [2024-12-14 22:45:36.537132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.787 [2024-12-14 22:45:36.537147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.787 qpair failed and we were unable to recover it. 
00:36:15.787 [2024-12-14 22:45:36.547172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.787 [2024-12-14 22:45:36.547228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.787 [2024-12-14 22:45:36.547242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.787 [2024-12-14 22:45:36.547249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.787 [2024-12-14 22:45:36.547255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.787 [2024-12-14 22:45:36.547270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.787 qpair failed and we were unable to recover it. 
00:36:15.787 [2024-12-14 22:45:36.557123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.787 [2024-12-14 22:45:36.557175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.787 [2024-12-14 22:45:36.557189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.787 [2024-12-14 22:45:36.557196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.787 [2024-12-14 22:45:36.557202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.787 [2024-12-14 22:45:36.557216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.787 qpair failed and we were unable to recover it. 
00:36:15.787 [2024-12-14 22:45:36.567158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.787 [2024-12-14 22:45:36.567210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.787 [2024-12-14 22:45:36.567224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.787 [2024-12-14 22:45:36.567230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.787 [2024-12-14 22:45:36.567236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.787 [2024-12-14 22:45:36.567252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.787 qpair failed and we were unable to recover it. 
00:36:15.787 [2024-12-14 22:45:36.577278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.787 [2024-12-14 22:45:36.577338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.787 [2024-12-14 22:45:36.577351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.787 [2024-12-14 22:45:36.577358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.787 [2024-12-14 22:45:36.577365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.787 [2024-12-14 22:45:36.577387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.787 qpair failed and we were unable to recover it. 
00:36:15.787 [2024-12-14 22:45:36.587206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.787 [2024-12-14 22:45:36.587259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.787 [2024-12-14 22:45:36.587272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.788 [2024-12-14 22:45:36.587279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.788 [2024-12-14 22:45:36.587285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:15.788 [2024-12-14 22:45:36.587299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.788 qpair failed and we were unable to recover it. 
00:36:16.311 [2024-12-14 22:45:36.938252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.311 [2024-12-14 22:45:36.938307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.311 [2024-12-14 22:45:36.938321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.311 [2024-12-14 22:45:36.938328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.311 [2024-12-14 22:45:36.938335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.311 [2024-12-14 22:45:36.938350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.311 qpair failed and we were unable to recover it. 
00:36:16.311 [2024-12-14 22:45:36.948357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.311 [2024-12-14 22:45:36.948409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.311 [2024-12-14 22:45:36.948424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.311 [2024-12-14 22:45:36.948431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.311 [2024-12-14 22:45:36.948437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.311 [2024-12-14 22:45:36.948453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.311 qpair failed and we were unable to recover it. 
00:36:16.311 [2024-12-14 22:45:36.958248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.311 [2024-12-14 22:45:36.958300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.311 [2024-12-14 22:45:36.958313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.311 [2024-12-14 22:45:36.958320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.311 [2024-12-14 22:45:36.958327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.311 [2024-12-14 22:45:36.958342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.311 qpair failed and we were unable to recover it. 
00:36:16.311 [2024-12-14 22:45:36.968355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.311 [2024-12-14 22:45:36.968410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.312 [2024-12-14 22:45:36.968424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.312 [2024-12-14 22:45:36.968433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.312 [2024-12-14 22:45:36.968440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.312 [2024-12-14 22:45:36.968456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.312 qpair failed and we were unable to recover it. 
00:36:16.312 [2024-12-14 22:45:36.978389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.312 [2024-12-14 22:45:36.978458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.312 [2024-12-14 22:45:36.978472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.312 [2024-12-14 22:45:36.978479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.312 [2024-12-14 22:45:36.978485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.312 [2024-12-14 22:45:36.978500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.312 qpair failed and we were unable to recover it. 
00:36:16.312 [2024-12-14 22:45:36.988408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.312 [2024-12-14 22:45:36.988462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.312 [2024-12-14 22:45:36.988476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.312 [2024-12-14 22:45:36.988483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.312 [2024-12-14 22:45:36.988489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.312 [2024-12-14 22:45:36.988504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.312 qpair failed and we were unable to recover it. 
00:36:16.312 [2024-12-14 22:45:36.998479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.312 [2024-12-14 22:45:36.998534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.312 [2024-12-14 22:45:36.998548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.312 [2024-12-14 22:45:36.998554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.312 [2024-12-14 22:45:36.998560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.312 [2024-12-14 22:45:36.998575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.312 qpair failed and we were unable to recover it. 
00:36:16.312 [2024-12-14 22:45:37.008483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.312 [2024-12-14 22:45:37.008538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.312 [2024-12-14 22:45:37.008551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.312 [2024-12-14 22:45:37.008557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.312 [2024-12-14 22:45:37.008564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.312 [2024-12-14 22:45:37.008582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.312 qpair failed and we were unable to recover it. 
00:36:16.312 [2024-12-14 22:45:37.018504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.312 [2024-12-14 22:45:37.018564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.312 [2024-12-14 22:45:37.018578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.312 [2024-12-14 22:45:37.018585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.312 [2024-12-14 22:45:37.018591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.312 [2024-12-14 22:45:37.018607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.312 qpair failed and we were unable to recover it. 
00:36:16.312 [2024-12-14 22:45:37.028528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.312 [2024-12-14 22:45:37.028583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.312 [2024-12-14 22:45:37.028597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.312 [2024-12-14 22:45:37.028604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.312 [2024-12-14 22:45:37.028609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.312 [2024-12-14 22:45:37.028625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.312 qpair failed and we were unable to recover it. 
00:36:16.312 [2024-12-14 22:45:37.038534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.312 [2024-12-14 22:45:37.038590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.312 [2024-12-14 22:45:37.038604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.312 [2024-12-14 22:45:37.038610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.312 [2024-12-14 22:45:37.038617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.312 [2024-12-14 22:45:37.038632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.312 qpair failed and we were unable to recover it. 
00:36:16.312 [2024-12-14 22:45:37.048597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.312 [2024-12-14 22:45:37.048658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.312 [2024-12-14 22:45:37.048672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.312 [2024-12-14 22:45:37.048679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.312 [2024-12-14 22:45:37.048684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.312 [2024-12-14 22:45:37.048700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.312 qpair failed and we were unable to recover it. 
00:36:16.312 [2024-12-14 22:45:37.058610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.312 [2024-12-14 22:45:37.058664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.312 [2024-12-14 22:45:37.058678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.312 [2024-12-14 22:45:37.058685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.312 [2024-12-14 22:45:37.058691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.312 [2024-12-14 22:45:37.058706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.312 qpair failed and we were unable to recover it. 
00:36:16.312 [2024-12-14 22:45:37.068634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.312 [2024-12-14 22:45:37.068688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.312 [2024-12-14 22:45:37.068702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.312 [2024-12-14 22:45:37.068709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.312 [2024-12-14 22:45:37.068715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.312 [2024-12-14 22:45:37.068730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.312 qpair failed and we were unable to recover it. 
00:36:16.312 [2024-12-14 22:45:37.078675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.312 [2024-12-14 22:45:37.078740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.312 [2024-12-14 22:45:37.078754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.312 [2024-12-14 22:45:37.078761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.312 [2024-12-14 22:45:37.078768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.312 [2024-12-14 22:45:37.078783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.312 qpair failed and we were unable to recover it. 
00:36:16.312 [2024-12-14 22:45:37.088709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.312 [2024-12-14 22:45:37.088766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.312 [2024-12-14 22:45:37.088780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.312 [2024-12-14 22:45:37.088787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.312 [2024-12-14 22:45:37.088793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.312 [2024-12-14 22:45:37.088809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.312 qpair failed and we were unable to recover it. 
00:36:16.312 [2024-12-14 22:45:37.098731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.312 [2024-12-14 22:45:37.098801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.312 [2024-12-14 22:45:37.098816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.312 [2024-12-14 22:45:37.098826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.312 [2024-12-14 22:45:37.098833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.313 [2024-12-14 22:45:37.098849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.313 qpair failed and we were unable to recover it. 
00:36:16.313 [2024-12-14 22:45:37.108753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.313 [2024-12-14 22:45:37.108804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.313 [2024-12-14 22:45:37.108818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.313 [2024-12-14 22:45:37.108824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.313 [2024-12-14 22:45:37.108830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.313 [2024-12-14 22:45:37.108845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.313 qpair failed and we were unable to recover it. 
00:36:16.313 [2024-12-14 22:45:37.118770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.313 [2024-12-14 22:45:37.118826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.313 [2024-12-14 22:45:37.118840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.313 [2024-12-14 22:45:37.118847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.313 [2024-12-14 22:45:37.118854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.313 [2024-12-14 22:45:37.118869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.313 qpair failed and we were unable to recover it. 
00:36:16.313 [2024-12-14 22:45:37.128752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.313 [2024-12-14 22:45:37.128840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.313 [2024-12-14 22:45:37.128855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.313 [2024-12-14 22:45:37.128862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.313 [2024-12-14 22:45:37.128869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.313 [2024-12-14 22:45:37.128884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.313 qpair failed and we were unable to recover it. 
00:36:16.313 [2024-12-14 22:45:37.138835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.313 [2024-12-14 22:45:37.138887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.313 [2024-12-14 22:45:37.138901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.313 [2024-12-14 22:45:37.138912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.313 [2024-12-14 22:45:37.138918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.313 [2024-12-14 22:45:37.138936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.313 qpair failed and we were unable to recover it. 
00:36:16.313 [2024-12-14 22:45:37.148844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.313 [2024-12-14 22:45:37.148896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.313 [2024-12-14 22:45:37.148914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.313 [2024-12-14 22:45:37.148920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.313 [2024-12-14 22:45:37.148927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.313 [2024-12-14 22:45:37.148942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.313 qpair failed and we were unable to recover it. 
00:36:16.313 [2024-12-14 22:45:37.158858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.313 [2024-12-14 22:45:37.158918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.313 [2024-12-14 22:45:37.158932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.313 [2024-12-14 22:45:37.158939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.313 [2024-12-14 22:45:37.158946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.313 [2024-12-14 22:45:37.158961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.313 qpair failed and we were unable to recover it. 
00:36:16.313 [2024-12-14 22:45:37.168945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.313 [2024-12-14 22:45:37.169000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.313 [2024-12-14 22:45:37.169014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.313 [2024-12-14 22:45:37.169020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.313 [2024-12-14 22:45:37.169026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.313 [2024-12-14 22:45:37.169041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.313 qpair failed and we were unable to recover it. 
00:36:16.313 [2024-12-14 22:45:37.179029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.313 [2024-12-14 22:45:37.179088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.313 [2024-12-14 22:45:37.179102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.313 [2024-12-14 22:45:37.179108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.313 [2024-12-14 22:45:37.179115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.313 [2024-12-14 22:45:37.179130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.313 qpair failed and we were unable to recover it. 
00:36:16.313 [2024-12-14 22:45:37.188973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.313 [2024-12-14 22:45:37.189040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.313 [2024-12-14 22:45:37.189055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.313 [2024-12-14 22:45:37.189062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.313 [2024-12-14 22:45:37.189069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.313 [2024-12-14 22:45:37.189084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.313 qpair failed and we were unable to recover it. 
00:36:16.574 [2024-12-14 22:45:37.199008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.574 [2024-12-14 22:45:37.199066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.574 [2024-12-14 22:45:37.199080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.574 [2024-12-14 22:45:37.199087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.574 [2024-12-14 22:45:37.199095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.574 [2024-12-14 22:45:37.199110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.574 qpair failed and we were unable to recover it. 
00:36:16.574 [2024-12-14 22:45:37.209070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.574 [2024-12-14 22:45:37.209129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.574 [2024-12-14 22:45:37.209143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.574 [2024-12-14 22:45:37.209150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.574 [2024-12-14 22:45:37.209157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.574 [2024-12-14 22:45:37.209172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.574 qpair failed and we were unable to recover it.
00:36:16.574 [2024-12-14 22:45:37.219079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.574 [2024-12-14 22:45:37.219131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.574 [2024-12-14 22:45:37.219145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.574 [2024-12-14 22:45:37.219151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.574 [2024-12-14 22:45:37.219158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.574 [2024-12-14 22:45:37.219172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.574 qpair failed and we were unable to recover it.
00:36:16.574 [2024-12-14 22:45:37.229097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.574 [2024-12-14 22:45:37.229151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.574 [2024-12-14 22:45:37.229168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.574 [2024-12-14 22:45:37.229175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.574 [2024-12-14 22:45:37.229181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.574 [2024-12-14 22:45:37.229196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.574 qpair failed and we were unable to recover it.
00:36:16.574 [2024-12-14 22:45:37.239124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.574 [2024-12-14 22:45:37.239175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.574 [2024-12-14 22:45:37.239188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.574 [2024-12-14 22:45:37.239195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.574 [2024-12-14 22:45:37.239202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.574 [2024-12-14 22:45:37.239217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.574 qpair failed and we were unable to recover it.
00:36:16.574 [2024-12-14 22:45:37.249183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.574 [2024-12-14 22:45:37.249251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.574 [2024-12-14 22:45:37.249266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.574 [2024-12-14 22:45:37.249273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.574 [2024-12-14 22:45:37.249279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.574 [2024-12-14 22:45:37.249295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.574 qpair failed and we were unable to recover it.
00:36:16.574 [2024-12-14 22:45:37.259220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.574 [2024-12-14 22:45:37.259273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.574 [2024-12-14 22:45:37.259287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.574 [2024-12-14 22:45:37.259294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.574 [2024-12-14 22:45:37.259300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.574 [2024-12-14 22:45:37.259315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.574 qpair failed and we were unable to recover it.
00:36:16.574 [2024-12-14 22:45:37.269141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.574 [2024-12-14 22:45:37.269197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.574 [2024-12-14 22:45:37.269211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.574 [2024-12-14 22:45:37.269219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.574 [2024-12-14 22:45:37.269232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.574 [2024-12-14 22:45:37.269247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.574 qpair failed and we were unable to recover it.
00:36:16.574 [2024-12-14 22:45:37.279232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.574 [2024-12-14 22:45:37.279287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.574 [2024-12-14 22:45:37.279300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.574 [2024-12-14 22:45:37.279307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.574 [2024-12-14 22:45:37.279313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.574 [2024-12-14 22:45:37.279328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.574 qpair failed and we were unable to recover it.
00:36:16.574 [2024-12-14 22:45:37.289221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.574 [2024-12-14 22:45:37.289294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.574 [2024-12-14 22:45:37.289307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.574 [2024-12-14 22:45:37.289314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.574 [2024-12-14 22:45:37.289320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.574 [2024-12-14 22:45:37.289335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.574 qpair failed and we were unable to recover it.
00:36:16.574 [2024-12-14 22:45:37.299242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.574 [2024-12-14 22:45:37.299294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.574 [2024-12-14 22:45:37.299307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.574 [2024-12-14 22:45:37.299314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.574 [2024-12-14 22:45:37.299320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.574 [2024-12-14 22:45:37.299335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.574 qpair failed and we were unable to recover it.
00:36:16.574 [2024-12-14 22:45:37.309338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.574 [2024-12-14 22:45:37.309388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.574 [2024-12-14 22:45:37.309401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.574 [2024-12-14 22:45:37.309408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.574 [2024-12-14 22:45:37.309415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.574 [2024-12-14 22:45:37.309430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.574 qpair failed and we were unable to recover it.
00:36:16.574 [2024-12-14 22:45:37.319339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.574 [2024-12-14 22:45:37.319416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.574 [2024-12-14 22:45:37.319430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.575 [2024-12-14 22:45:37.319437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.575 [2024-12-14 22:45:37.319443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.575 [2024-12-14 22:45:37.319458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.575 qpair failed and we were unable to recover it.
00:36:16.575 [2024-12-14 22:45:37.329414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.575 [2024-12-14 22:45:37.329472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.575 [2024-12-14 22:45:37.329485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.575 [2024-12-14 22:45:37.329491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.575 [2024-12-14 22:45:37.329498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.575 [2024-12-14 22:45:37.329515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.575 qpair failed and we were unable to recover it.
00:36:16.575 [2024-12-14 22:45:37.339455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.575 [2024-12-14 22:45:37.339508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.575 [2024-12-14 22:45:37.339522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.575 [2024-12-14 22:45:37.339529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.575 [2024-12-14 22:45:37.339536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.575 [2024-12-14 22:45:37.339551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.575 qpair failed and we were unable to recover it.
00:36:16.575 [2024-12-14 22:45:37.349451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.575 [2024-12-14 22:45:37.349506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.575 [2024-12-14 22:45:37.349521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.575 [2024-12-14 22:45:37.349528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.575 [2024-12-14 22:45:37.349535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.575 [2024-12-14 22:45:37.349550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.575 qpair failed and we were unable to recover it.
00:36:16.575 [2024-12-14 22:45:37.359475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.575 [2024-12-14 22:45:37.359562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.575 [2024-12-14 22:45:37.359579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.575 [2024-12-14 22:45:37.359586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.575 [2024-12-14 22:45:37.359593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.575 [2024-12-14 22:45:37.359608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.575 qpair failed and we were unable to recover it.
00:36:16.575 [2024-12-14 22:45:37.369511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.575 [2024-12-14 22:45:37.369567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.575 [2024-12-14 22:45:37.369581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.575 [2024-12-14 22:45:37.369587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.575 [2024-12-14 22:45:37.369594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.575 [2024-12-14 22:45:37.369609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.575 qpair failed and we were unable to recover it.
00:36:16.575 [2024-12-14 22:45:37.379530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.575 [2024-12-14 22:45:37.379583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.575 [2024-12-14 22:45:37.379596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.575 [2024-12-14 22:45:37.379603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.575 [2024-12-14 22:45:37.379609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.575 [2024-12-14 22:45:37.379623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.575 qpair failed and we were unable to recover it.
00:36:16.575 [2024-12-14 22:45:37.389559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.575 [2024-12-14 22:45:37.389613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.575 [2024-12-14 22:45:37.389626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.575 [2024-12-14 22:45:37.389633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.575 [2024-12-14 22:45:37.389639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.575 [2024-12-14 22:45:37.389654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.575 qpair failed and we were unable to recover it.
00:36:16.575 [2024-12-14 22:45:37.399561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.575 [2024-12-14 22:45:37.399659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.575 [2024-12-14 22:45:37.399674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.575 [2024-12-14 22:45:37.399681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.575 [2024-12-14 22:45:37.399692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.575 [2024-12-14 22:45:37.399707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.575 qpair failed and we were unable to recover it.
00:36:16.575 [2024-12-14 22:45:37.409631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.575 [2024-12-14 22:45:37.409687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.575 [2024-12-14 22:45:37.409700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.575 [2024-12-14 22:45:37.409707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.575 [2024-12-14 22:45:37.409713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.575 [2024-12-14 22:45:37.409729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.575 qpair failed and we were unable to recover it.
00:36:16.575 [2024-12-14 22:45:37.419576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.575 [2024-12-14 22:45:37.419639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.575 [2024-12-14 22:45:37.419653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.575 [2024-12-14 22:45:37.419659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.575 [2024-12-14 22:45:37.419665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.575 [2024-12-14 22:45:37.419680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.575 qpair failed and we were unable to recover it.
00:36:16.575 [2024-12-14 22:45:37.429704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.575 [2024-12-14 22:45:37.429773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.575 [2024-12-14 22:45:37.429787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.575 [2024-12-14 22:45:37.429794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.575 [2024-12-14 22:45:37.429800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.575 [2024-12-14 22:45:37.429815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.575 qpair failed and we were unable to recover it.
00:36:16.575 [2024-12-14 22:45:37.439708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.575 [2024-12-14 22:45:37.439762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.575 [2024-12-14 22:45:37.439775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.575 [2024-12-14 22:45:37.439782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.575 [2024-12-14 22:45:37.439788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.575 [2024-12-14 22:45:37.439803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.575 qpair failed and we were unable to recover it.
00:36:16.575 [2024-12-14 22:45:37.449718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.575 [2024-12-14 22:45:37.449776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.575 [2024-12-14 22:45:37.449790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.575 [2024-12-14 22:45:37.449797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.575 [2024-12-14 22:45:37.449804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.575 [2024-12-14 22:45:37.449819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.575 qpair failed and we were unable to recover it.
00:36:16.836 [2024-12-14 22:45:37.459765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.836 [2024-12-14 22:45:37.459824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.836 [2024-12-14 22:45:37.459838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.836 [2024-12-14 22:45:37.459846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.836 [2024-12-14 22:45:37.459852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.836 [2024-12-14 22:45:37.459867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.836 qpair failed and we were unable to recover it.
00:36:16.836 [2024-12-14 22:45:37.469790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.836 [2024-12-14 22:45:37.469851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.836 [2024-12-14 22:45:37.469865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.836 [2024-12-14 22:45:37.469873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.836 [2024-12-14 22:45:37.469879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.836 [2024-12-14 22:45:37.469894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.836 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 22:45:37.479852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 22:45:37.479913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 22:45:37.479927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 22:45:37.479934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 22:45:37.479940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.837 [2024-12-14 22:45:37.479955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 22:45:37.489787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 22:45:37.489880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 22:45:37.489897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 22:45:37.489907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 22:45:37.489913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.837 [2024-12-14 22:45:37.489928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 22:45:37.499876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 22:45:37.499933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 22:45:37.499947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 22:45:37.499954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 22:45:37.499960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.837 [2024-12-14 22:45:37.499975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 22:45:37.509899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 22:45:37.509955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 22:45:37.509968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 22:45:37.509975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 22:45:37.509981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.837 [2024-12-14 22:45:37.509996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 22:45:37.519933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 22:45:37.519985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 22:45:37.519998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 22:45:37.520005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 22:45:37.520012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.837 [2024-12-14 22:45:37.520027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 22:45:37.530019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 22:45:37.530075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 22:45:37.530088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 22:45:37.530099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 22:45:37.530105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.837 [2024-12-14 22:45:37.530121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 22:45:37.539989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 22:45:37.540046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 22:45:37.540059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 22:45:37.540065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 22:45:37.540072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.837 [2024-12-14 22:45:37.540087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 22:45:37.550007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 22:45:37.550059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 22:45:37.550073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 22:45:37.550080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 22:45:37.550086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:16.837 [2024-12-14 22:45:37.550101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 22:45:37.560044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.837 [2024-12-14 22:45:37.560098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.837 [2024-12-14 22:45:37.560112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.837 [2024-12-14 22:45:37.560118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.837 [2024-12-14 22:45:37.560125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.837 [2024-12-14 22:45:37.560140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.837 qpair failed and we were unable to recover it. 
00:36:16.837 [2024-12-14 22:45:37.570091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.837 [2024-12-14 22:45:37.570144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.837 [2024-12-14 22:45:37.570157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.837 [2024-12-14 22:45:37.570164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.837 [2024-12-14 22:45:37.570170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.837 [2024-12-14 22:45:37.570189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.837 qpair failed and we were unable to recover it. 
00:36:16.837 [2024-12-14 22:45:37.580109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.837 [2024-12-14 22:45:37.580167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.837 [2024-12-14 22:45:37.580180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.837 [2024-12-14 22:45:37.580186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.837 [2024-12-14 22:45:37.580194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.837 [2024-12-14 22:45:37.580208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.837 qpair failed and we were unable to recover it. 
00:36:16.837 [2024-12-14 22:45:37.590139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.837 [2024-12-14 22:45:37.590209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.837 [2024-12-14 22:45:37.590223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.837 [2024-12-14 22:45:37.590230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.837 [2024-12-14 22:45:37.590236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.837 [2024-12-14 22:45:37.590251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.837 qpair failed and we were unable to recover it. 
00:36:16.837 [2024-12-14 22:45:37.600168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.837 [2024-12-14 22:45:37.600222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.837 [2024-12-14 22:45:37.600236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.837 [2024-12-14 22:45:37.600242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.837 [2024-12-14 22:45:37.600248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.837 [2024-12-14 22:45:37.600263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.837 qpair failed and we were unable to recover it. 
00:36:16.837 [2024-12-14 22:45:37.610230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.838 [2024-12-14 22:45:37.610287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.838 [2024-12-14 22:45:37.610300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.838 [2024-12-14 22:45:37.610307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.838 [2024-12-14 22:45:37.610314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.838 [2024-12-14 22:45:37.610329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.838 qpair failed and we were unable to recover it. 
00:36:16.838 [2024-12-14 22:45:37.620143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.838 [2024-12-14 22:45:37.620204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.838 [2024-12-14 22:45:37.620217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.838 [2024-12-14 22:45:37.620224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.838 [2024-12-14 22:45:37.620231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.838 [2024-12-14 22:45:37.620246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.838 qpair failed and we were unable to recover it. 
00:36:16.838 [2024-12-14 22:45:37.630251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.838 [2024-12-14 22:45:37.630305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.838 [2024-12-14 22:45:37.630318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.838 [2024-12-14 22:45:37.630325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.838 [2024-12-14 22:45:37.630331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.838 [2024-12-14 22:45:37.630346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.838 qpair failed and we were unable to recover it. 
00:36:16.838 [2024-12-14 22:45:37.640213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.838 [2024-12-14 22:45:37.640267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.838 [2024-12-14 22:45:37.640280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.838 [2024-12-14 22:45:37.640286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.838 [2024-12-14 22:45:37.640293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.838 [2024-12-14 22:45:37.640308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.838 qpair failed and we were unable to recover it. 
00:36:16.838 [2024-12-14 22:45:37.650256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.838 [2024-12-14 22:45:37.650311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.838 [2024-12-14 22:45:37.650325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.838 [2024-12-14 22:45:37.650332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.838 [2024-12-14 22:45:37.650339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.838 [2024-12-14 22:45:37.650354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.838 qpair failed and we were unable to recover it. 
00:36:16.838 [2024-12-14 22:45:37.660406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.838 [2024-12-14 22:45:37.660489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.838 [2024-12-14 22:45:37.660505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.838 [2024-12-14 22:45:37.660516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.838 [2024-12-14 22:45:37.660522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.838 [2024-12-14 22:45:37.660537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.838 qpair failed and we were unable to recover it. 
00:36:16.838 [2024-12-14 22:45:37.670375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.838 [2024-12-14 22:45:37.670432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.838 [2024-12-14 22:45:37.670445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.838 [2024-12-14 22:45:37.670452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.838 [2024-12-14 22:45:37.670459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.838 [2024-12-14 22:45:37.670474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.838 qpair failed and we were unable to recover it. 
00:36:16.838 [2024-12-14 22:45:37.680322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.838 [2024-12-14 22:45:37.680377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.838 [2024-12-14 22:45:37.680390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.838 [2024-12-14 22:45:37.680397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.838 [2024-12-14 22:45:37.680404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.838 [2024-12-14 22:45:37.680418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.838 qpair failed and we were unable to recover it. 
00:36:16.838 [2024-12-14 22:45:37.690368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.838 [2024-12-14 22:45:37.690427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.838 [2024-12-14 22:45:37.690441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.838 [2024-12-14 22:45:37.690448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.838 [2024-12-14 22:45:37.690455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.838 [2024-12-14 22:45:37.690469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.838 qpair failed and we were unable to recover it. 
00:36:16.838 [2024-12-14 22:45:37.700459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.838 [2024-12-14 22:45:37.700513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.838 [2024-12-14 22:45:37.700526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.838 [2024-12-14 22:45:37.700533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.838 [2024-12-14 22:45:37.700540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.838 [2024-12-14 22:45:37.700558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.838 qpair failed and we were unable to recover it. 
00:36:16.838 [2024-12-14 22:45:37.710481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.838 [2024-12-14 22:45:37.710534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.838 [2024-12-14 22:45:37.710547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.838 [2024-12-14 22:45:37.710554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.838 [2024-12-14 22:45:37.710560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:16.838 [2024-12-14 22:45:37.710574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.838 qpair failed and we were unable to recover it. 
00:36:17.099 [2024-12-14 22:45:37.720441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.099 [2024-12-14 22:45:37.720497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.099 [2024-12-14 22:45:37.720511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.099 [2024-12-14 22:45:37.720518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.099 [2024-12-14 22:45:37.720524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.099 [2024-12-14 22:45:37.720539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.099 qpair failed and we were unable to recover it. 
00:36:17.099 [2024-12-14 22:45:37.730542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.099 [2024-12-14 22:45:37.730596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.099 [2024-12-14 22:45:37.730608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.099 [2024-12-14 22:45:37.730615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.099 [2024-12-14 22:45:37.730622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.099 [2024-12-14 22:45:37.730637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.099 qpair failed and we were unable to recover it. 
00:36:17.099 [2024-12-14 22:45:37.740585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.099 [2024-12-14 22:45:37.740643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.099 [2024-12-14 22:45:37.740657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.099 [2024-12-14 22:45:37.740663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.099 [2024-12-14 22:45:37.740670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.099 [2024-12-14 22:45:37.740684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.099 qpair failed and we were unable to recover it. 
00:36:17.099 [2024-12-14 22:45:37.750514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.099 [2024-12-14 22:45:37.750562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.099 [2024-12-14 22:45:37.750576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.099 [2024-12-14 22:45:37.750583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 22:45:37.750589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.100 [2024-12-14 22:45:37.750606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.100 [2024-12-14 22:45:37.760712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.100 [2024-12-14 22:45:37.760763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.100 [2024-12-14 22:45:37.760777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.100 [2024-12-14 22:45:37.760784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 22:45:37.760791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.100 [2024-12-14 22:45:37.760807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.100 [2024-12-14 22:45:37.770688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.100 [2024-12-14 22:45:37.770743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.100 [2024-12-14 22:45:37.770757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.100 [2024-12-14 22:45:37.770764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 22:45:37.770770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.100 [2024-12-14 22:45:37.770786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.100 [2024-12-14 22:45:37.780684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.100 [2024-12-14 22:45:37.780737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.100 [2024-12-14 22:45:37.780750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.100 [2024-12-14 22:45:37.780756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 22:45:37.780764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.100 [2024-12-14 22:45:37.780779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.100 [2024-12-14 22:45:37.790689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.100 [2024-12-14 22:45:37.790745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.100 [2024-12-14 22:45:37.790762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.100 [2024-12-14 22:45:37.790769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 22:45:37.790776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.100 [2024-12-14 22:45:37.790790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.100 [2024-12-14 22:45:37.800732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.100 [2024-12-14 22:45:37.800790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.100 [2024-12-14 22:45:37.800803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.100 [2024-12-14 22:45:37.800811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 22:45:37.800817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.100 [2024-12-14 22:45:37.800832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.100 [2024-12-14 22:45:37.810696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.100 [2024-12-14 22:45:37.810753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.100 [2024-12-14 22:45:37.810766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.100 [2024-12-14 22:45:37.810773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 22:45:37.810779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.100 [2024-12-14 22:45:37.810794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.100 [2024-12-14 22:45:37.820786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.100 [2024-12-14 22:45:37.820854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.100 [2024-12-14 22:45:37.820868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.100 [2024-12-14 22:45:37.820875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 22:45:37.820881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.100 [2024-12-14 22:45:37.820896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.100 [2024-12-14 22:45:37.830794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.100 [2024-12-14 22:45:37.830860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.100 [2024-12-14 22:45:37.830874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.100 [2024-12-14 22:45:37.830880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 22:45:37.830889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.100 [2024-12-14 22:45:37.830908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.100 [2024-12-14 22:45:37.840891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.100 [2024-12-14 22:45:37.840956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.100 [2024-12-14 22:45:37.840970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.100 [2024-12-14 22:45:37.840977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 22:45:37.840983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.100 [2024-12-14 22:45:37.840998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.100 [2024-12-14 22:45:37.850871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.100 [2024-12-14 22:45:37.850971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.100 [2024-12-14 22:45:37.850987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.100 [2024-12-14 22:45:37.850994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 22:45:37.851001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.100 [2024-12-14 22:45:37.851016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.100 [2024-12-14 22:45:37.860897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.100 [2024-12-14 22:45:37.860960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.100 [2024-12-14 22:45:37.860974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.100 [2024-12-14 22:45:37.860980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 22:45:37.860987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.100 [2024-12-14 22:45:37.861002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.100 [2024-12-14 22:45:37.870930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.100 [2024-12-14 22:45:37.870985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.100 [2024-12-14 22:45:37.871000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.100 [2024-12-14 22:45:37.871006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 22:45:37.871012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.100 [2024-12-14 22:45:37.871028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.100 [2024-12-14 22:45:37.880944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.100 [2024-12-14 22:45:37.881002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.100 [2024-12-14 22:45:37.881016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.100 [2024-12-14 22:45:37.881023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 22:45:37.881029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.100 [2024-12-14 22:45:37.881043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.101 [2024-12-14 22:45:37.890987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.101 [2024-12-14 22:45:37.891047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.101 [2024-12-14 22:45:37.891061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.101 [2024-12-14 22:45:37.891068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.101 [2024-12-14 22:45:37.891075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.101 [2024-12-14 22:45:37.891090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.101 qpair failed and we were unable to recover it. 
00:36:17.101 [2024-12-14 22:45:37.900999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.101 [2024-12-14 22:45:37.901057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.101 [2024-12-14 22:45:37.901071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.101 [2024-12-14 22:45:37.901077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.101 [2024-12-14 22:45:37.901085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.101 [2024-12-14 22:45:37.901100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.101 qpair failed and we were unable to recover it. 
00:36:17.101 [2024-12-14 22:45:37.911021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.101 [2024-12-14 22:45:37.911081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.101 [2024-12-14 22:45:37.911094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.101 [2024-12-14 22:45:37.911101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.101 [2024-12-14 22:45:37.911108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.101 [2024-12-14 22:45:37.911122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.101 qpair failed and we were unable to recover it. 
00:36:17.101 [2024-12-14 22:45:37.920985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.101 [2024-12-14 22:45:37.921038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.101 [2024-12-14 22:45:37.921055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.101 [2024-12-14 22:45:37.921063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.101 [2024-12-14 22:45:37.921071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.101 [2024-12-14 22:45:37.921087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.101 qpair failed and we were unable to recover it. 
00:36:17.101 [2024-12-14 22:45:37.931103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.101 [2024-12-14 22:45:37.931160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.101 [2024-12-14 22:45:37.931173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.101 [2024-12-14 22:45:37.931180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.101 [2024-12-14 22:45:37.931186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.101 [2024-12-14 22:45:37.931201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.101 qpair failed and we were unable to recover it. 
00:36:17.101 [2024-12-14 22:45:37.941115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.101 [2024-12-14 22:45:37.941171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.101 [2024-12-14 22:45:37.941185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.101 [2024-12-14 22:45:37.941191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.101 [2024-12-14 22:45:37.941197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.101 [2024-12-14 22:45:37.941212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.101 qpair failed and we were unable to recover it. 
00:36:17.101 [2024-12-14 22:45:37.951091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.101 [2024-12-14 22:45:37.951146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.101 [2024-12-14 22:45:37.951160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.101 [2024-12-14 22:45:37.951167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.101 [2024-12-14 22:45:37.951173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.101 [2024-12-14 22:45:37.951188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.101 qpair failed and we were unable to recover it. 
00:36:17.101 [2024-12-14 22:45:37.961176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.101 [2024-12-14 22:45:37.961226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.101 [2024-12-14 22:45:37.961240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.101 [2024-12-14 22:45:37.961247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.101 [2024-12-14 22:45:37.961259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.101 [2024-12-14 22:45:37.961275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.101 qpair failed and we were unable to recover it. 
00:36:17.101 [2024-12-14 22:45:37.971207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.101 [2024-12-14 22:45:37.971264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.101 [2024-12-14 22:45:37.971277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.101 [2024-12-14 22:45:37.971283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.101 [2024-12-14 22:45:37.971290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.101 [2024-12-14 22:45:37.971305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.101 qpair failed and we were unable to recover it. 
00:36:17.101 [2024-12-14 22:45:37.981279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.101 [2024-12-14 22:45:37.981333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.101 [2024-12-14 22:45:37.981347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.101 [2024-12-14 22:45:37.981354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.101 [2024-12-14 22:45:37.981360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.101 [2024-12-14 22:45:37.981376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.101 qpair failed and we were unable to recover it. 
00:36:17.362 [2024-12-14 22:45:37.991190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.362 [2024-12-14 22:45:37.991246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.362 [2024-12-14 22:45:37.991259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.362 [2024-12-14 22:45:37.991267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.362 [2024-12-14 22:45:37.991274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.362 [2024-12-14 22:45:37.991288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.362 qpair failed and we were unable to recover it. 
00:36:17.362 [2024-12-14 22:45:38.001283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.362 [2024-12-14 22:45:38.001333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.362 [2024-12-14 22:45:38.001348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.362 [2024-12-14 22:45:38.001355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.362 [2024-12-14 22:45:38.001361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.362 [2024-12-14 22:45:38.001376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.362 qpair failed and we were unable to recover it. 
00:36:17.362 [2024-12-14 22:45:38.011318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.362 [2024-12-14 22:45:38.011377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.362 [2024-12-14 22:45:38.011391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.362 [2024-12-14 22:45:38.011397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.362 [2024-12-14 22:45:38.011404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.362 [2024-12-14 22:45:38.011419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.362 qpair failed and we were unable to recover it. 
00:36:17.362 [2024-12-14 22:45:38.021373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.362 [2024-12-14 22:45:38.021440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.362 [2024-12-14 22:45:38.021453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.362 [2024-12-14 22:45:38.021461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.362 [2024-12-14 22:45:38.021467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.362 [2024-12-14 22:45:38.021481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.362 qpair failed and we were unable to recover it. 
00:36:17.362 [2024-12-14 22:45:38.031369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.362 [2024-12-14 22:45:38.031422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.362 [2024-12-14 22:45:38.031435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.362 [2024-12-14 22:45:38.031441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.362 [2024-12-14 22:45:38.031448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.362 [2024-12-14 22:45:38.031463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.362 qpair failed and we were unable to recover it. 
00:36:17.362 [2024-12-14 22:45:38.041389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.362 [2024-12-14 22:45:38.041445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.362 [2024-12-14 22:45:38.041458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.362 [2024-12-14 22:45:38.041465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.362 [2024-12-14 22:45:38.041471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.362 [2024-12-14 22:45:38.041486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.362 qpair failed and we were unable to recover it. 
00:36:17.362 [2024-12-14 22:45:38.051427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.362 [2024-12-14 22:45:38.051483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.362 [2024-12-14 22:45:38.051500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.362 [2024-12-14 22:45:38.051507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.362 [2024-12-14 22:45:38.051514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.362 [2024-12-14 22:45:38.051529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.362 qpair failed and we were unable to recover it. 
00:36:17.362 [2024-12-14 22:45:38.061457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.362 [2024-12-14 22:45:38.061509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.362 [2024-12-14 22:45:38.061523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.362 [2024-12-14 22:45:38.061529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.362 [2024-12-14 22:45:38.061536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.362 [2024-12-14 22:45:38.061551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.362 qpair failed and we were unable to recover it. 
00:36:17.362 [2024-12-14 22:45:38.071524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.362 [2024-12-14 22:45:38.071577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.362 [2024-12-14 22:45:38.071590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.362 [2024-12-14 22:45:38.071596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.362 [2024-12-14 22:45:38.071603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.362 [2024-12-14 22:45:38.071618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.362 qpair failed and we were unable to recover it. 
00:36:17.362 [2024-12-14 22:45:38.081569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.362 [2024-12-14 22:45:38.081656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.362 [2024-12-14 22:45:38.081669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.362 [2024-12-14 22:45:38.081676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.362 [2024-12-14 22:45:38.081682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.363 [2024-12-14 22:45:38.081697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.363 qpair failed and we were unable to recover it. 
00:36:17.363 [2024-12-14 22:45:38.091465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.363 [2024-12-14 22:45:38.091520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.363 [2024-12-14 22:45:38.091533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.363 [2024-12-14 22:45:38.091543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.363 [2024-12-14 22:45:38.091549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.363 [2024-12-14 22:45:38.091564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.363 qpair failed and we were unable to recover it. 
00:36:17.363 [2024-12-14 22:45:38.101557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.363 [2024-12-14 22:45:38.101611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.363 [2024-12-14 22:45:38.101625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.363 [2024-12-14 22:45:38.101632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.363 [2024-12-14 22:45:38.101638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.363 [2024-12-14 22:45:38.101653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.363 qpair failed and we were unable to recover it. 
00:36:17.363 [2024-12-14 22:45:38.111601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.363 [2024-12-14 22:45:38.111656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.363 [2024-12-14 22:45:38.111670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.363 [2024-12-14 22:45:38.111676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.363 [2024-12-14 22:45:38.111683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.363 [2024-12-14 22:45:38.111698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.363 qpair failed and we were unable to recover it. 
00:36:17.363 [2024-12-14 22:45:38.121616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.363 [2024-12-14 22:45:38.121671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.363 [2024-12-14 22:45:38.121685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.363 [2024-12-14 22:45:38.121692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.363 [2024-12-14 22:45:38.121698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.363 [2024-12-14 22:45:38.121713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.363 qpair failed and we were unable to recover it. 
00:36:17.363 [2024-12-14 22:45:38.131666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.363 [2024-12-14 22:45:38.131723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.363 [2024-12-14 22:45:38.131737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.363 [2024-12-14 22:45:38.131744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.363 [2024-12-14 22:45:38.131750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.363 [2024-12-14 22:45:38.131768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.363 qpair failed and we were unable to recover it. 
00:36:17.363 [2024-12-14 22:45:38.141677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.363 [2024-12-14 22:45:38.141727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.363 [2024-12-14 22:45:38.141741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.363 [2024-12-14 22:45:38.141748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.363 [2024-12-14 22:45:38.141755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.363 [2024-12-14 22:45:38.141770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.363 qpair failed and we were unable to recover it. 
00:36:17.363 [2024-12-14 22:45:38.151710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.363 [2024-12-14 22:45:38.151762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.363 [2024-12-14 22:45:38.151777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.363 [2024-12-14 22:45:38.151783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.363 [2024-12-14 22:45:38.151790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:17.363 [2024-12-14 22:45:38.151805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.363 qpair failed and we were unable to recover it. 
00:36:17.363 [2024-12-14 22:45:38.161756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.363 [2024-12-14 22:45:38.161812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.363 [2024-12-14 22:45:38.161826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.363 [2024-12-14 22:45:38.161833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.363 [2024-12-14 22:45:38.161840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.363 [2024-12-14 22:45:38.161855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.363 qpair failed and we were unable to recover it.
00:36:17.363 [2024-12-14 22:45:38.171786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.363 [2024-12-14 22:45:38.171844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.363 [2024-12-14 22:45:38.171858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.363 [2024-12-14 22:45:38.171864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.363 [2024-12-14 22:45:38.171871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.363 [2024-12-14 22:45:38.171886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.363 qpair failed and we were unable to recover it.
00:36:17.363 [2024-12-14 22:45:38.181804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.363 [2024-12-14 22:45:38.181863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.363 [2024-12-14 22:45:38.181877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.363 [2024-12-14 22:45:38.181883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.363 [2024-12-14 22:45:38.181890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.363 [2024-12-14 22:45:38.181908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.363 qpair failed and we were unable to recover it.
00:36:17.363 [2024-12-14 22:45:38.191757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.363 [2024-12-14 22:45:38.191813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.363 [2024-12-14 22:45:38.191826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.363 [2024-12-14 22:45:38.191833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.363 [2024-12-14 22:45:38.191840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.363 [2024-12-14 22:45:38.191854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.363 qpair failed and we were unable to recover it.
00:36:17.363 [2024-12-14 22:45:38.201936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.363 [2024-12-14 22:45:38.202022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.363 [2024-12-14 22:45:38.202035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.363 [2024-12-14 22:45:38.202042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.363 [2024-12-14 22:45:38.202048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.363 [2024-12-14 22:45:38.202064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.363 qpair failed and we were unable to recover it.
00:36:17.363 [2024-12-14 22:45:38.211870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.363 [2024-12-14 22:45:38.211932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.363 [2024-12-14 22:45:38.211946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.363 [2024-12-14 22:45:38.211953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.363 [2024-12-14 22:45:38.211960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.363 [2024-12-14 22:45:38.211975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.363 qpair failed and we were unable to recover it.
00:36:17.364 [2024-12-14 22:45:38.221946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.364 [2024-12-14 22:45:38.222004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.364 [2024-12-14 22:45:38.222017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.364 [2024-12-14 22:45:38.222027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.364 [2024-12-14 22:45:38.222034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.364 [2024-12-14 22:45:38.222049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.364 qpair failed and we were unable to recover it.
00:36:17.364 [2024-12-14 22:45:38.231958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.364 [2024-12-14 22:45:38.232012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.364 [2024-12-14 22:45:38.232025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.364 [2024-12-14 22:45:38.232032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.364 [2024-12-14 22:45:38.232038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.364 [2024-12-14 22:45:38.232054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.364 qpair failed and we were unable to recover it.
00:36:17.364 [2024-12-14 22:45:38.241991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.364 [2024-12-14 22:45:38.242052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.364 [2024-12-14 22:45:38.242066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.364 [2024-12-14 22:45:38.242073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.364 [2024-12-14 22:45:38.242079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.364 [2024-12-14 22:45:38.242094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.364 qpair failed and we were unable to recover it.
00:36:17.624 [2024-12-14 22:45:38.251981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.625 [2024-12-14 22:45:38.252039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.625 [2024-12-14 22:45:38.252054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.625 [2024-12-14 22:45:38.252062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.625 [2024-12-14 22:45:38.252069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.625 [2024-12-14 22:45:38.252083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.625 qpair failed and we were unable to recover it.
00:36:17.625 [2024-12-14 22:45:38.262072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.625 [2024-12-14 22:45:38.262126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.625 [2024-12-14 22:45:38.262141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.625 [2024-12-14 22:45:38.262148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.625 [2024-12-14 22:45:38.262154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.625 [2024-12-14 22:45:38.262172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.625 qpair failed and we were unable to recover it.
00:36:17.625 [2024-12-14 22:45:38.272046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.625 [2024-12-14 22:45:38.272104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.625 [2024-12-14 22:45:38.272117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.625 [2024-12-14 22:45:38.272124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.625 [2024-12-14 22:45:38.272131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.625 [2024-12-14 22:45:38.272146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.625 qpair failed and we were unable to recover it.
00:36:17.625 [2024-12-14 22:45:38.282082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.625 [2024-12-14 22:45:38.282137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.625 [2024-12-14 22:45:38.282150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.625 [2024-12-14 22:45:38.282157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.625 [2024-12-14 22:45:38.282163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.625 [2024-12-14 22:45:38.282177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.625 qpair failed and we were unable to recover it.
00:36:17.625 [2024-12-14 22:45:38.292124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.625 [2024-12-14 22:45:38.292178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.625 [2024-12-14 22:45:38.292191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.625 [2024-12-14 22:45:38.292198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.625 [2024-12-14 22:45:38.292203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.625 [2024-12-14 22:45:38.292218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.625 qpair failed and we were unable to recover it.
00:36:17.625 [2024-12-14 22:45:38.302139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.625 [2024-12-14 22:45:38.302191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.625 [2024-12-14 22:45:38.302204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.625 [2024-12-14 22:45:38.302211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.625 [2024-12-14 22:45:38.302217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.625 [2024-12-14 22:45:38.302231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.625 qpair failed and we were unable to recover it.
00:36:17.625 [2024-12-14 22:45:38.312169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.625 [2024-12-14 22:45:38.312218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.625 [2024-12-14 22:45:38.312232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.625 [2024-12-14 22:45:38.312239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.625 [2024-12-14 22:45:38.312245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.625 [2024-12-14 22:45:38.312259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.625 qpair failed and we were unable to recover it.
00:36:17.625 [2024-12-14 22:45:38.322137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.625 [2024-12-14 22:45:38.322190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.625 [2024-12-14 22:45:38.322204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.625 [2024-12-14 22:45:38.322211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.625 [2024-12-14 22:45:38.322217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.625 [2024-12-14 22:45:38.322232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.625 qpair failed and we were unable to recover it.
00:36:17.625 [2024-12-14 22:45:38.332244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.625 [2024-12-14 22:45:38.332306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.625 [2024-12-14 22:45:38.332319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.625 [2024-12-14 22:45:38.332326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.625 [2024-12-14 22:45:38.332332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.625 [2024-12-14 22:45:38.332347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.625 qpair failed and we were unable to recover it.
00:36:17.625 [2024-12-14 22:45:38.342310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.625 [2024-12-14 22:45:38.342371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.625 [2024-12-14 22:45:38.342384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.625 [2024-12-14 22:45:38.342391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.625 [2024-12-14 22:45:38.342397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.625 [2024-12-14 22:45:38.342411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.625 qpair failed and we were unable to recover it.
00:36:17.625 [2024-12-14 22:45:38.352275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.625 [2024-12-14 22:45:38.352328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.625 [2024-12-14 22:45:38.352345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.625 [2024-12-14 22:45:38.352352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.625 [2024-12-14 22:45:38.352358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.625 [2024-12-14 22:45:38.352373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.625 qpair failed and we were unable to recover it.
00:36:17.625 [2024-12-14 22:45:38.362247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.625 [2024-12-14 22:45:38.362308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.625 [2024-12-14 22:45:38.362323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.625 [2024-12-14 22:45:38.362330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.625 [2024-12-14 22:45:38.362336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.625 [2024-12-14 22:45:38.362352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.625 qpair failed and we were unable to recover it.
00:36:17.625 [2024-12-14 22:45:38.372354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.625 [2024-12-14 22:45:38.372441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.625 [2024-12-14 22:45:38.372456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.625 [2024-12-14 22:45:38.372463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.625 [2024-12-14 22:45:38.372469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.625 [2024-12-14 22:45:38.372484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.625 qpair failed and we were unable to recover it.
00:36:17.625 [2024-12-14 22:45:38.382367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.625 [2024-12-14 22:45:38.382422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.625 [2024-12-14 22:45:38.382435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.625 [2024-12-14 22:45:38.382443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.626 [2024-12-14 22:45:38.382449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.626 [2024-12-14 22:45:38.382464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.626 qpair failed and we were unable to recover it.
00:36:17.626 [2024-12-14 22:45:38.392403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.626 [2024-12-14 22:45:38.392458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.626 [2024-12-14 22:45:38.392471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.626 [2024-12-14 22:45:38.392478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.626 [2024-12-14 22:45:38.392487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.626 [2024-12-14 22:45:38.392502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.626 qpair failed and we were unable to recover it.
00:36:17.626 [2024-12-14 22:45:38.402454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.626 [2024-12-14 22:45:38.402509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.626 [2024-12-14 22:45:38.402522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.626 [2024-12-14 22:45:38.402529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.626 [2024-12-14 22:45:38.402535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.626 [2024-12-14 22:45:38.402550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.626 qpair failed and we were unable to recover it.
00:36:17.626 [2024-12-14 22:45:38.412402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.626 [2024-12-14 22:45:38.412508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.626 [2024-12-14 22:45:38.412523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.626 [2024-12-14 22:45:38.412529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.626 [2024-12-14 22:45:38.412536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.626 [2024-12-14 22:45:38.412551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.626 qpair failed and we were unable to recover it.
00:36:17.626 [2024-12-14 22:45:38.422476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.626 [2024-12-14 22:45:38.422531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.626 [2024-12-14 22:45:38.422545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.626 [2024-12-14 22:45:38.422551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.626 [2024-12-14 22:45:38.422557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.626 [2024-12-14 22:45:38.422573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.626 qpair failed and we were unable to recover it.
00:36:17.626 [2024-12-14 22:45:38.432428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.626 [2024-12-14 22:45:38.432486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.626 [2024-12-14 22:45:38.432499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.626 [2024-12-14 22:45:38.432506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.626 [2024-12-14 22:45:38.432512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.626 [2024-12-14 22:45:38.432527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.626 qpair failed and we were unable to recover it.
00:36:17.626 [2024-12-14 22:45:38.442536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.626 [2024-12-14 22:45:38.442592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.626 [2024-12-14 22:45:38.442606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.626 [2024-12-14 22:45:38.442613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.626 [2024-12-14 22:45:38.442619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.626 [2024-12-14 22:45:38.442634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.626 qpair failed and we were unable to recover it.
00:36:17.626 [2024-12-14 22:45:38.452541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.626 [2024-12-14 22:45:38.452607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.626 [2024-12-14 22:45:38.452622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.626 [2024-12-14 22:45:38.452629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.626 [2024-12-14 22:45:38.452636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.626 [2024-12-14 22:45:38.452651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.626 qpair failed and we were unable to recover it.
00:36:17.626 [2024-12-14 22:45:38.462615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.626 [2024-12-14 22:45:38.462672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.626 [2024-12-14 22:45:38.462686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.626 [2024-12-14 22:45:38.462692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.626 [2024-12-14 22:45:38.462699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.626 [2024-12-14 22:45:38.462714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.626 qpair failed and we were unable to recover it.
00:36:17.626 [2024-12-14 22:45:38.472624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.626 [2024-12-14 22:45:38.472680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.626 [2024-12-14 22:45:38.472693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.626 [2024-12-14 22:45:38.472700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.626 [2024-12-14 22:45:38.472707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.626 [2024-12-14 22:45:38.472722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.626 qpair failed and we were unable to recover it.
00:36:17.626 [2024-12-14 22:45:38.482652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.626 [2024-12-14 22:45:38.482703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.626 [2024-12-14 22:45:38.482719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.626 [2024-12-14 22:45:38.482726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.626 [2024-12-14 22:45:38.482732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.626 [2024-12-14 22:45:38.482748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.626 qpair failed and we were unable to recover it.
00:36:17.626 [2024-12-14 22:45:38.492684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.626 [2024-12-14 22:45:38.492740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.626 [2024-12-14 22:45:38.492753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.626 [2024-12-14 22:45:38.492760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.626 [2024-12-14 22:45:38.492766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.626 [2024-12-14 22:45:38.492782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.626 qpair failed and we were unable to recover it.
00:36:17.626 [2024-12-14 22:45:38.502712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.626 [2024-12-14 22:45:38.502766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.626 [2024-12-14 22:45:38.502780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.626 [2024-12-14 22:45:38.502787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.626 [2024-12-14 22:45:38.502793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.626 [2024-12-14 22:45:38.502808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.626 qpair failed and we were unable to recover it.
00:36:17.888 [2024-12-14 22:45:38.512741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.888 [2024-12-14 22:45:38.512801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.888 [2024-12-14 22:45:38.512816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.888 [2024-12-14 22:45:38.512823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.888 [2024-12-14 22:45:38.512831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.888 [2024-12-14 22:45:38.512848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.888 qpair failed and we were unable to recover it.
00:36:17.888 [2024-12-14 22:45:38.522772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.888 [2024-12-14 22:45:38.522824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.888 [2024-12-14 22:45:38.522838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.888 [2024-12-14 22:45:38.522845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.888 [2024-12-14 22:45:38.522854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.888 [2024-12-14 22:45:38.522870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.888 qpair failed and we were unable to recover it.
00:36:17.888 [2024-12-14 22:45:38.532830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.888 [2024-12-14 22:45:38.532883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.888 [2024-12-14 22:45:38.532896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.888 [2024-12-14 22:45:38.532908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.888 [2024-12-14 22:45:38.532915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.888 [2024-12-14 22:45:38.532932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.888 qpair failed and we were unable to recover it.
00:36:17.888 [2024-12-14 22:45:38.542829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.889 [2024-12-14 22:45:38.542881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.889 [2024-12-14 22:45:38.542894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.889 [2024-12-14 22:45:38.542901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.889 [2024-12-14 22:45:38.542910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.889 [2024-12-14 22:45:38.542925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.889 qpair failed and we were unable to recover it.
00:36:17.889 [2024-12-14 22:45:38.552852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.889 [2024-12-14 22:45:38.552909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.889 [2024-12-14 22:45:38.552924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.889 [2024-12-14 22:45:38.552931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.889 [2024-12-14 22:45:38.552937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.889 [2024-12-14 22:45:38.552953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.889 qpair failed and we were unable to recover it.
00:36:17.889 [2024-12-14 22:45:38.562881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.889 [2024-12-14 22:45:38.562977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.889 [2024-12-14 22:45:38.562993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.889 [2024-12-14 22:45:38.563000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.889 [2024-12-14 22:45:38.563007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.889 [2024-12-14 22:45:38.563024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.889 qpair failed and we were unable to recover it.
00:36:17.889 [2024-12-14 22:45:38.572928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.889 [2024-12-14 22:45:38.572987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.889 [2024-12-14 22:45:38.573000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.889 [2024-12-14 22:45:38.573007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.889 [2024-12-14 22:45:38.573014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.889 [2024-12-14 22:45:38.573029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.889 qpair failed and we were unable to recover it.
00:36:17.889 [2024-12-14 22:45:38.582939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.889 [2024-12-14 22:45:38.582993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.889 [2024-12-14 22:45:38.583007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.889 [2024-12-14 22:45:38.583013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.889 [2024-12-14 22:45:38.583019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.889 [2024-12-14 22:45:38.583035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.889 qpair failed and we were unable to recover it.
00:36:17.889 [2024-12-14 22:45:38.592950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.889 [2024-12-14 22:45:38.593037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.889 [2024-12-14 22:45:38.593052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.889 [2024-12-14 22:45:38.593059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.889 [2024-12-14 22:45:38.593066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.889 [2024-12-14 22:45:38.593081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.889 qpair failed and we were unable to recover it.
00:36:17.889 [2024-12-14 22:45:38.602983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.889 [2024-12-14 22:45:38.603039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.889 [2024-12-14 22:45:38.603053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.889 [2024-12-14 22:45:38.603059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.889 [2024-12-14 22:45:38.603066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.889 [2024-12-14 22:45:38.603081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.889 qpair failed and we were unable to recover it.
00:36:17.889 [2024-12-14 22:45:38.612957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.889 [2024-12-14 22:45:38.613024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.889 [2024-12-14 22:45:38.613039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.889 [2024-12-14 22:45:38.613046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.889 [2024-12-14 22:45:38.613052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.889 [2024-12-14 22:45:38.613067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.889 qpair failed and we were unable to recover it.
00:36:17.889 [2024-12-14 22:45:38.623052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.889 [2024-12-14 22:45:38.623102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.889 [2024-12-14 22:45:38.623117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.889 [2024-12-14 22:45:38.623123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.889 [2024-12-14 22:45:38.623130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.889 [2024-12-14 22:45:38.623144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.889 qpair failed and we were unable to recover it.
00:36:17.889 [2024-12-14 22:45:38.633080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.889 [2024-12-14 22:45:38.633139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.889 [2024-12-14 22:45:38.633152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.889 [2024-12-14 22:45:38.633159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.889 [2024-12-14 22:45:38.633165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.889 [2024-12-14 22:45:38.633181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.889 qpair failed and we were unable to recover it.
00:36:17.889 [2024-12-14 22:45:38.643102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.889 [2024-12-14 22:45:38.643159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.889 [2024-12-14 22:45:38.643172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.889 [2024-12-14 22:45:38.643179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.889 [2024-12-14 22:45:38.643185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.889 [2024-12-14 22:45:38.643200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.889 qpair failed and we were unable to recover it.
00:36:17.889 [2024-12-14 22:45:38.653147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.889 [2024-12-14 22:45:38.653205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.889 [2024-12-14 22:45:38.653220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.889 [2024-12-14 22:45:38.653230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.889 [2024-12-14 22:45:38.653238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.889 [2024-12-14 22:45:38.653253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.889 qpair failed and we were unable to recover it.
00:36:17.889 [2024-12-14 22:45:38.663166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.889 [2024-12-14 22:45:38.663217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.889 [2024-12-14 22:45:38.663232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.889 [2024-12-14 22:45:38.663240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.889 [2024-12-14 22:45:38.663247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.889 [2024-12-14 22:45:38.663263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.889 qpair failed and we were unable to recover it.
00:36:17.889 [2024-12-14 22:45:38.673138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.889 [2024-12-14 22:45:38.673190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.889 [2024-12-14 22:45:38.673205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.889 [2024-12-14 22:45:38.673212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.890 [2024-12-14 22:45:38.673220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.890 [2024-12-14 22:45:38.673237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.890 qpair failed and we were unable to recover it.
00:36:17.890 [2024-12-14 22:45:38.683219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.890 [2024-12-14 22:45:38.683273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.890 [2024-12-14 22:45:38.683287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.890 [2024-12-14 22:45:38.683296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.890 [2024-12-14 22:45:38.683303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.890 [2024-12-14 22:45:38.683320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.890 qpair failed and we were unable to recover it.
00:36:17.890 [2024-12-14 22:45:38.693253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.890 [2024-12-14 22:45:38.693304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.890 [2024-12-14 22:45:38.693319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.890 [2024-12-14 22:45:38.693327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.890 [2024-12-14 22:45:38.693335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.890 [2024-12-14 22:45:38.693357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.890 qpair failed and we were unable to recover it.
00:36:17.890 [2024-12-14 22:45:38.703322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.890 [2024-12-14 22:45:38.703424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.890 [2024-12-14 22:45:38.703438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.890 [2024-12-14 22:45:38.703446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.890 [2024-12-14 22:45:38.703453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.890 [2024-12-14 22:45:38.703470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.890 qpair failed and we were unable to recover it.
00:36:17.890 [2024-12-14 22:45:38.713229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.890 [2024-12-14 22:45:38.713282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.890 [2024-12-14 22:45:38.713296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.890 [2024-12-14 22:45:38.713305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.890 [2024-12-14 22:45:38.713313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.890 [2024-12-14 22:45:38.713329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.890 qpair failed and we were unable to recover it.
00:36:17.890 [2024-12-14 22:45:38.723333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.890 [2024-12-14 22:45:38.723386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.890 [2024-12-14 22:45:38.723400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.890 [2024-12-14 22:45:38.723409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.890 [2024-12-14 22:45:38.723416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.890 [2024-12-14 22:45:38.723432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.890 qpair failed and we were unable to recover it.
00:36:17.890 [2024-12-14 22:45:38.733376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.890 [2024-12-14 22:45:38.733428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.890 [2024-12-14 22:45:38.733443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.890 [2024-12-14 22:45:38.733451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.890 [2024-12-14 22:45:38.733459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.890 [2024-12-14 22:45:38.733475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.890 qpair failed and we were unable to recover it.
00:36:17.890 [2024-12-14 22:45:38.743431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.890 [2024-12-14 22:45:38.743487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.890 [2024-12-14 22:45:38.743501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.890 [2024-12-14 22:45:38.743510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.890 [2024-12-14 22:45:38.743517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.890 [2024-12-14 22:45:38.743534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.890 qpair failed and we were unable to recover it.
00:36:17.890 [2024-12-14 22:45:38.753446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.890 [2024-12-14 22:45:38.753544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.890 [2024-12-14 22:45:38.753559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.890 [2024-12-14 22:45:38.753568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.890 [2024-12-14 22:45:38.753575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.890 [2024-12-14 22:45:38.753592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.890 qpair failed and we were unable to recover it.
00:36:17.890 [2024-12-14 22:45:38.763510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.890 [2024-12-14 22:45:38.763561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.890 [2024-12-14 22:45:38.763576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.890 [2024-12-14 22:45:38.763585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.890 [2024-12-14 22:45:38.763592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:17.890 [2024-12-14 22:45:38.763608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.890 qpair failed and we were unable to recover it.
00:36:18.150 [2024-12-14 22:45:38.773576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.150 [2024-12-14 22:45:38.773632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.150 [2024-12-14 22:45:38.773646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.150 [2024-12-14 22:45:38.773656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.150 [2024-12-14 22:45:38.773664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90
00:36:18.150 [2024-12-14 22:45:38.773682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.150 qpair failed and we were unable to recover it.
00:36:18.150 [2024-12-14 22:45:38.783556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.150 [2024-12-14 22:45:38.783615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.150 [2024-12-14 22:45:38.783630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.150 [2024-12-14 22:45:38.783641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.150 [2024-12-14 22:45:38.783649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.150 [2024-12-14 22:45:38.783667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.150 qpair failed and we were unable to recover it. 
00:36:18.150 [2024-12-14 22:45:38.793542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.150 [2024-12-14 22:45:38.793594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.150 [2024-12-14 22:45:38.793609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.150 [2024-12-14 22:45:38.793618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.150 [2024-12-14 22:45:38.793626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.150 [2024-12-14 22:45:38.793643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.150 qpair failed and we were unable to recover it. 
00:36:18.150 [2024-12-14 22:45:38.803583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.150 [2024-12-14 22:45:38.803633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.150 [2024-12-14 22:45:38.803648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.150 [2024-12-14 22:45:38.803656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.150 [2024-12-14 22:45:38.803664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.150 [2024-12-14 22:45:38.803681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.150 qpair failed and we were unable to recover it. 
00:36:18.150 [2024-12-14 22:45:38.813599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.150 [2024-12-14 22:45:38.813654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.150 [2024-12-14 22:45:38.813668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.150 [2024-12-14 22:45:38.813677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.150 [2024-12-14 22:45:38.813683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.150 [2024-12-14 22:45:38.813697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.150 qpair failed and we were unable to recover it. 
00:36:18.150 [2024-12-14 22:45:38.823648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.150 [2024-12-14 22:45:38.823704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.150 [2024-12-14 22:45:38.823717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.150 [2024-12-14 22:45:38.823724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.150 [2024-12-14 22:45:38.823731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.150 [2024-12-14 22:45:38.823749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.150 qpair failed and we were unable to recover it. 
00:36:18.150 [2024-12-14 22:45:38.833668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.150 [2024-12-14 22:45:38.833720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.150 [2024-12-14 22:45:38.833733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.150 [2024-12-14 22:45:38.833740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.150 [2024-12-14 22:45:38.833747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.150 [2024-12-14 22:45:38.833761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.150 qpair failed and we were unable to recover it. 
00:36:18.150 [2024-12-14 22:45:38.843740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.150 [2024-12-14 22:45:38.843793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.150 [2024-12-14 22:45:38.843806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.150 [2024-12-14 22:45:38.843814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.150 [2024-12-14 22:45:38.843820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.150 [2024-12-14 22:45:38.843835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-12-14 22:45:38.853736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.151 [2024-12-14 22:45:38.853792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.151 [2024-12-14 22:45:38.853806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.151 [2024-12-14 22:45:38.853813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.151 [2024-12-14 22:45:38.853820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.151 [2024-12-14 22:45:38.853835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-12-14 22:45:38.863683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.151 [2024-12-14 22:45:38.863741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.151 [2024-12-14 22:45:38.863756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.151 [2024-12-14 22:45:38.863763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.151 [2024-12-14 22:45:38.863770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.151 [2024-12-14 22:45:38.863785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-12-14 22:45:38.873779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.151 [2024-12-14 22:45:38.873836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.151 [2024-12-14 22:45:38.873850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.151 [2024-12-14 22:45:38.873857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.151 [2024-12-14 22:45:38.873863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.151 [2024-12-14 22:45:38.873879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-12-14 22:45:38.883807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.151 [2024-12-14 22:45:38.883861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.151 [2024-12-14 22:45:38.883874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.151 [2024-12-14 22:45:38.883881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.151 [2024-12-14 22:45:38.883887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.151 [2024-12-14 22:45:38.883906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-12-14 22:45:38.893853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.151 [2024-12-14 22:45:38.893913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.151 [2024-12-14 22:45:38.893927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.151 [2024-12-14 22:45:38.893934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.151 [2024-12-14 22:45:38.893940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.151 [2024-12-14 22:45:38.893955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-12-14 22:45:38.903878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.151 [2024-12-14 22:45:38.903934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.151 [2024-12-14 22:45:38.903948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.151 [2024-12-14 22:45:38.903955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.151 [2024-12-14 22:45:38.903961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.151 [2024-12-14 22:45:38.903976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-12-14 22:45:38.913910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.151 [2024-12-14 22:45:38.913959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.151 [2024-12-14 22:45:38.913975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.151 [2024-12-14 22:45:38.913982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.151 [2024-12-14 22:45:38.913989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.151 [2024-12-14 22:45:38.914004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-12-14 22:45:38.923932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.151 [2024-12-14 22:45:38.923985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.151 [2024-12-14 22:45:38.923999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.151 [2024-12-14 22:45:38.924005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.151 [2024-12-14 22:45:38.924012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.151 [2024-12-14 22:45:38.924027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-12-14 22:45:38.933967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.151 [2024-12-14 22:45:38.934025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.151 [2024-12-14 22:45:38.934039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.151 [2024-12-14 22:45:38.934046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.151 [2024-12-14 22:45:38.934053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.151 [2024-12-14 22:45:38.934068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-12-14 22:45:38.944008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.151 [2024-12-14 22:45:38.944062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.151 [2024-12-14 22:45:38.944075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.151 [2024-12-14 22:45:38.944082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.151 [2024-12-14 22:45:38.944089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.151 [2024-12-14 22:45:38.944104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-12-14 22:45:38.954034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.151 [2024-12-14 22:45:38.954083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.151 [2024-12-14 22:45:38.954097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.151 [2024-12-14 22:45:38.954104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.151 [2024-12-14 22:45:38.954114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.151 [2024-12-14 22:45:38.954131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-12-14 22:45:38.964056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.151 [2024-12-14 22:45:38.964110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.151 [2024-12-14 22:45:38.964124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.151 [2024-12-14 22:45:38.964131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.151 [2024-12-14 22:45:38.964138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.151 [2024-12-14 22:45:38.964154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-12-14 22:45:38.974078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.151 [2024-12-14 22:45:38.974134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.151 [2024-12-14 22:45:38.974148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.151 [2024-12-14 22:45:38.974155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.151 [2024-12-14 22:45:38.974162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.151 [2024-12-14 22:45:38.974177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-12-14 22:45:38.984101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.152 [2024-12-14 22:45:38.984158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.152 [2024-12-14 22:45:38.984171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.152 [2024-12-14 22:45:38.984179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.152 [2024-12-14 22:45:38.984186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.152 [2024-12-14 22:45:38.984202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.152 qpair failed and we were unable to recover it. 
00:36:18.152 [2024-12-14 22:45:38.994134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.152 [2024-12-14 22:45:38.994188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.152 [2024-12-14 22:45:38.994202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.152 [2024-12-14 22:45:38.994210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.152 [2024-12-14 22:45:38.994217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.152 [2024-12-14 22:45:38.994233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.152 qpair failed and we were unable to recover it. 
00:36:18.152 [2024-12-14 22:45:39.004124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.152 [2024-12-14 22:45:39.004176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.152 [2024-12-14 22:45:39.004190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.152 [2024-12-14 22:45:39.004197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.152 [2024-12-14 22:45:39.004203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.152 [2024-12-14 22:45:39.004218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.152 qpair failed and we were unable to recover it. 
00:36:18.152 [2024-12-14 22:45:39.014196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.152 [2024-12-14 22:45:39.014255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.152 [2024-12-14 22:45:39.014269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.152 [2024-12-14 22:45:39.014277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.152 [2024-12-14 22:45:39.014283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.152 [2024-12-14 22:45:39.014298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.152 qpair failed and we were unable to recover it. 
00:36:18.152 [2024-12-14 22:45:39.024224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.152 [2024-12-14 22:45:39.024287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.152 [2024-12-14 22:45:39.024302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.152 [2024-12-14 22:45:39.024309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.152 [2024-12-14 22:45:39.024316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.152 [2024-12-14 22:45:39.024331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.152 qpair failed and we were unable to recover it. 
00:36:18.411 [2024-12-14 22:45:39.034295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.411 [2024-12-14 22:45:39.034363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.411 [2024-12-14 22:45:39.034378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.411 [2024-12-14 22:45:39.034385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.412 [2024-12-14 22:45:39.034391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.412 [2024-12-14 22:45:39.034407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.412 qpair failed and we were unable to recover it. 
00:36:18.412 [2024-12-14 22:45:39.044282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.412 [2024-12-14 22:45:39.044339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.412 [2024-12-14 22:45:39.044356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.412 [2024-12-14 22:45:39.044363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.412 [2024-12-14 22:45:39.044370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.412 [2024-12-14 22:45:39.044385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.412 qpair failed and we were unable to recover it. 
00:36:18.412 [2024-12-14 22:45:39.054324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.412 [2024-12-14 22:45:39.054383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.412 [2024-12-14 22:45:39.054397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.412 [2024-12-14 22:45:39.054404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.412 [2024-12-14 22:45:39.054410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.412 [2024-12-14 22:45:39.054425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.412 qpair failed and we were unable to recover it. 
00:36:18.412 [2024-12-14 22:45:39.064346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.412 [2024-12-14 22:45:39.064398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.412 [2024-12-14 22:45:39.064412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.412 [2024-12-14 22:45:39.064419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.412 [2024-12-14 22:45:39.064426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.412 [2024-12-14 22:45:39.064441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.412 qpair failed and we were unable to recover it. 
00:36:18.412 [2024-12-14 22:45:39.074356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.412 [2024-12-14 22:45:39.074415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.412 [2024-12-14 22:45:39.074430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.412 [2024-12-14 22:45:39.074437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.412 [2024-12-14 22:45:39.074444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.412 [2024-12-14 22:45:39.074459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.412 qpair failed and we were unable to recover it. 
00:36:18.412 [2024-12-14 22:45:39.084388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.412 [2024-12-14 22:45:39.084449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.412 [2024-12-14 22:45:39.084464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.412 [2024-12-14 22:45:39.084471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.412 [2024-12-14 22:45:39.084481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.412 [2024-12-14 22:45:39.084495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.412 qpair failed and we were unable to recover it. 
00:36:18.412 [2024-12-14 22:45:39.094423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.412 [2024-12-14 22:45:39.094484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.412 [2024-12-14 22:45:39.094499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.412 [2024-12-14 22:45:39.094506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.412 [2024-12-14 22:45:39.094512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.412 [2024-12-14 22:45:39.094527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.412 qpair failed and we were unable to recover it. 
00:36:18.412 [2024-12-14 22:45:39.104366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.412 [2024-12-14 22:45:39.104423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.412 [2024-12-14 22:45:39.104436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.412 [2024-12-14 22:45:39.104443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.412 [2024-12-14 22:45:39.104449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.412 [2024-12-14 22:45:39.104464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.412 qpair failed and we were unable to recover it. 
00:36:18.412 [2024-12-14 22:45:39.114473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.412 [2024-12-14 22:45:39.114528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.412 [2024-12-14 22:45:39.114543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.412 [2024-12-14 22:45:39.114550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.412 [2024-12-14 22:45:39.114557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.412 [2024-12-14 22:45:39.114572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.412 qpair failed and we were unable to recover it. 
00:36:18.412 [2024-12-14 22:45:39.124505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.412 [2024-12-14 22:45:39.124561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.412 [2024-12-14 22:45:39.124575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.412 [2024-12-14 22:45:39.124583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.412 [2024-12-14 22:45:39.124589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.412 [2024-12-14 22:45:39.124604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.412 qpair failed and we were unable to recover it. 
00:36:18.412 [2024-12-14 22:45:39.134534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.412 [2024-12-14 22:45:39.134590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.412 [2024-12-14 22:45:39.134603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.412 [2024-12-14 22:45:39.134610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.412 [2024-12-14 22:45:39.134617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.412 [2024-12-14 22:45:39.134632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.412 qpair failed and we were unable to recover it. 
00:36:18.412 [2024-12-14 22:45:39.144529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.412 [2024-12-14 22:45:39.144628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.412 [2024-12-14 22:45:39.144643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.412 [2024-12-14 22:45:39.144650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.412 [2024-12-14 22:45:39.144657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.412 [2024-12-14 22:45:39.144672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.412 qpair failed and we were unable to recover it. 
00:36:18.412 [2024-12-14 22:45:39.154552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.412 [2024-12-14 22:45:39.154607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.412 [2024-12-14 22:45:39.154620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.412 [2024-12-14 22:45:39.154627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.413 [2024-12-14 22:45:39.154632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.413 [2024-12-14 22:45:39.154649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.413 qpair failed and we were unable to recover it. 
00:36:18.413 [2024-12-14 22:45:39.164623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.413 [2024-12-14 22:45:39.164677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.413 [2024-12-14 22:45:39.164693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.413 [2024-12-14 22:45:39.164700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.413 [2024-12-14 22:45:39.164706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.413 [2024-12-14 22:45:39.164722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.413 qpair failed and we were unable to recover it. 
00:36:18.413 [2024-12-14 22:45:39.174645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.413 [2024-12-14 22:45:39.174707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.413 [2024-12-14 22:45:39.174721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.413 [2024-12-14 22:45:39.174728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.413 [2024-12-14 22:45:39.174734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.413 [2024-12-14 22:45:39.174749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.413 qpair failed and we were unable to recover it. 
00:36:18.413 [2024-12-14 22:45:39.184613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.413 [2024-12-14 22:45:39.184665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.413 [2024-12-14 22:45:39.184679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.413 [2024-12-14 22:45:39.184685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.413 [2024-12-14 22:45:39.184692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a98000b90 00:36:18.413 [2024-12-14 22:45:39.184707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.413 qpair failed and we were unable to recover it. 
00:36:18.413 [2024-12-14 22:45:39.194729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.413 [2024-12-14 22:45:39.194826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.413 [2024-12-14 22:45:39.194883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.413 [2024-12-14 22:45:39.194920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.413 [2024-12-14 22:45:39.194942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a94000b90 00:36:18.413 [2024-12-14 22:45:39.194994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.413 qpair failed and we were unable to recover it. 
00:36:18.413 [2024-12-14 22:45:39.204695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.413 [2024-12-14 22:45:39.204775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.413 [2024-12-14 22:45:39.204805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.413 [2024-12-14 22:45:39.204820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.413 [2024-12-14 22:45:39.204833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a94000b90 00:36:18.413 [2024-12-14 22:45:39.204864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.413 qpair failed and we were unable to recover it. 00:36:18.413 [2024-12-14 22:45:39.204981] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:36:18.413 A controller has encountered a failure and is being reset. 
00:36:18.413 [2024-12-14 22:45:39.214792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.413 [2024-12-14 22:45:39.214892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.413 [2024-12-14 22:45:39.214983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.413 [2024-12-14 22:45:39.215008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.413 [2024-12-14 22:45:39.215030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5fd6a0 00:36:18.413 [2024-12-14 22:45:39.215081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:18.413 qpair failed and we were unable to recover it. 
00:36:18.413 [2024-12-14 22:45:39.224816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.413 [2024-12-14 22:45:39.224909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.413 [2024-12-14 22:45:39.224940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.413 [2024-12-14 22:45:39.224956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.413 [2024-12-14 22:45:39.224970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5fd6a0 00:36:18.413 [2024-12-14 22:45:39.225000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:18.413 qpair failed and we were unable to recover it. 00:36:18.413 Controller properly reset. 00:36:18.413 [2024-12-14 22:45:39.245680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:36:18.413 Initializing NVMe Controllers 00:36:18.413 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:18.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:18.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:18.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:18.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:18.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:18.413 Initialization complete. Launching workers. 
00:36:18.413 Starting thread on core 1 00:36:18.413 Starting thread on core 2 00:36:18.413 Starting thread on core 3 00:36:18.413 Starting thread on core 0 00:36:18.413 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:18.413 00:36:18.413 real 0m10.640s 00:36:18.413 user 0m19.289s 00:36:18.413 sys 0m4.729s 00:36:18.413 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:18.413 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:18.413 ************************************ 00:36:18.413 END TEST nvmf_target_disconnect_tc2 00:36:18.413 ************************************ 00:36:18.413 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:18.413 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:18.672 rmmod nvme_tcp 00:36:18.672 rmmod nvme_fabrics 00:36:18.672 rmmod nvme_keyring 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 542998 ']' 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 542998 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 542998 ']' 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 542998 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 542998 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 542998' 00:36:18.672 killing process with pid 542998 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 542998 00:36:18.672 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 542998 00:36:18.931 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:18.931 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:18.931 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:18.931 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:18.931 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:36:18.931 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:18.931 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:36:18.931 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:18.931 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:18.931 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:18.931 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:18.931 22:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:20.837 22:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:20.837 00:36:20.837 real 0m19.390s 00:36:20.837 user 0m46.614s 00:36:20.837 sys 0m9.655s 00:36:20.837 22:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:20.837 22:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:20.837 ************************************ 00:36:20.837 END TEST nvmf_target_disconnect 00:36:20.837 ************************************ 00:36:20.837 22:45:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:20.837 00:36:20.837 real 7m24.603s 00:36:20.837 user 16m53.749s 00:36:20.837 sys 2m8.267s 00:36:20.837 22:45:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:20.837 22:45:41 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.837 ************************************ 00:36:20.837 END TEST nvmf_host 00:36:20.837 ************************************ 00:36:21.096 22:45:41 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:21.096 22:45:41 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:21.096 22:45:41 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:21.096 22:45:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:21.096 22:45:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:21.096 22:45:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:21.096 ************************************ 00:36:21.096 START TEST nvmf_target_core_interrupt_mode 00:36:21.096 ************************************ 00:36:21.096 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:21.096 * Looking for test storage... 
00:36:21.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:21.096 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:21.096 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:36:21.096 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:21.096 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:21.096 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:21.096 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:21.096 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:21.096 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:21.096 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:21.096 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:21.096 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:21.096 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:21.096 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:21.096 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:21.096 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:21.096 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:21.096 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:21.097 22:45:41 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:21.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.097 --rc 
genhtml_branch_coverage=1 00:36:21.097 --rc genhtml_function_coverage=1 00:36:21.097 --rc genhtml_legend=1 00:36:21.097 --rc geninfo_all_blocks=1 00:36:21.097 --rc geninfo_unexecuted_blocks=1 00:36:21.097 00:36:21.097 ' 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:21.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.097 --rc genhtml_branch_coverage=1 00:36:21.097 --rc genhtml_function_coverage=1 00:36:21.097 --rc genhtml_legend=1 00:36:21.097 --rc geninfo_all_blocks=1 00:36:21.097 --rc geninfo_unexecuted_blocks=1 00:36:21.097 00:36:21.097 ' 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:21.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.097 --rc genhtml_branch_coverage=1 00:36:21.097 --rc genhtml_function_coverage=1 00:36:21.097 --rc genhtml_legend=1 00:36:21.097 --rc geninfo_all_blocks=1 00:36:21.097 --rc geninfo_unexecuted_blocks=1 00:36:21.097 00:36:21.097 ' 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:21.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.097 --rc genhtml_branch_coverage=1 00:36:21.097 --rc genhtml_function_coverage=1 00:36:21.097 --rc genhtml_legend=1 00:36:21.097 --rc geninfo_all_blocks=1 00:36:21.097 --rc geninfo_unexecuted_blocks=1 00:36:21.097 00:36:21.097 ' 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:21.097 
22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.097 22:45:41 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:21.097 
22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:21.097 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:21.357 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:21.357 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:21.357 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:21.357 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:21.357 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:21.357 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:21.357 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:21.357 ************************************ 00:36:21.357 START TEST nvmf_abort 00:36:21.357 ************************************ 00:36:21.357 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:21.357 * Looking for test storage... 
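The NVMF_APP argument assembly traced above (nvmf/common.sh@25–39) is a standard bash conditional array-append: the shared-memory id and error mask are always added, and `--interrupt-mode` is appended only when the corresponding flag is set. A minimal standalone sketch of that pattern (variable names taken from the trace; the `nvmf_tgt` binary name and the literal flag values are illustrative):

```shell
# Sketch of the NVMF_APP assembly seen in the trace (common.sh@25-39).
# Values are illustrative; the real script derives them from the test config.
NVMF_APP=(nvmf_tgt)
NVMF_APP_SHM_ID=0
interrupt_mode=1

# Always pass the shared-memory id and a verbose error mask (common.sh@29).
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)

# Append --interrupt-mode only when the flag is set (common.sh@33-34).
if [ "$interrupt_mode" -eq 1 ]; then
    NVMF_APP+=(--interrupt-mode)
fi

printf '%s\n' "${NVMF_APP[*]}"
```

Building the command line as an array rather than a string keeps arguments with embedded spaces intact when the app is finally launched as `"${NVMF_APP[@]}"`.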
00:36:21.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:21.357 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:21.357 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:21.358 22:45:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:21.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.358 --rc genhtml_branch_coverage=1 00:36:21.358 --rc genhtml_function_coverage=1 00:36:21.358 --rc genhtml_legend=1 00:36:21.358 --rc geninfo_all_blocks=1 00:36:21.358 --rc geninfo_unexecuted_blocks=1 00:36:21.358 00:36:21.358 ' 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:21.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.358 --rc genhtml_branch_coverage=1 00:36:21.358 --rc genhtml_function_coverage=1 00:36:21.358 --rc genhtml_legend=1 00:36:21.358 --rc geninfo_all_blocks=1 00:36:21.358 --rc geninfo_unexecuted_blocks=1 00:36:21.358 00:36:21.358 ' 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:21.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.358 --rc genhtml_branch_coverage=1 00:36:21.358 --rc genhtml_function_coverage=1 00:36:21.358 --rc genhtml_legend=1 00:36:21.358 --rc geninfo_all_blocks=1 00:36:21.358 --rc geninfo_unexecuted_blocks=1 00:36:21.358 00:36:21.358 ' 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:21.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.358 --rc genhtml_branch_coverage=1 00:36:21.358 --rc genhtml_function_coverage=1 00:36:21.358 --rc genhtml_legend=1 00:36:21.358 --rc geninfo_all_blocks=1 00:36:21.358 --rc geninfo_unexecuted_blocks=1 00:36:21.358 00:36:21.358 ' 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:21.358 22:45:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:21.358 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:21.359 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:21.359 22:45:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:21.359 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:21.359 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:21.359 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:21.359 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:21.359 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:21.359 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:21.359 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:21.359 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:21.359 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:21.359 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:21.359 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:21.359 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:21.359 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:21.359 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
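The `lt 1.15 2` / `cmp_versions` walk traced earlier (scripts/common.sh@333–368) compares dotted versions field by field, treating a missing field as zero. A simplified, self-contained sketch of that comparison (an assumed reduction: the real helper also splits on `-` and supports other operators):

```shell
# Field-by-field "less than" for dotted versions, after the cmp_versions
# trace above (assumed simplification of scripts/common.sh@333-368).
lt() {
    local IFS=.
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        # A missing component counts as 0, so 1.15 vs 2 compares 1 vs 2.
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
```

This is why the trace shows `decimal 1` / `decimal 2` steps: each dotted component is validated as a number before the arithmetic comparison.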
00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:27.927 22:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:27.927 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:27.927 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:27.928 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:27.928 
22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:27.928 Found net devices under 0000:af:00.0: cvl_0_0 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:27.928 Found net devices under 0000:af:00.1: cvl_0_1 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:27.928 22:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:27.928 22:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:27.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:27.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:36:27.928 00:36:27.928 --- 10.0.0.2 ping statistics --- 00:36:27.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:27.928 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:27.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:27.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:36:27.928 00:36:27.928 --- 10.0.0.1 ping statistics --- 00:36:27.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:27.928 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=547663 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 547663 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 547663 ']' 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:27.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:27.928 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.928 [2024-12-14 22:45:48.249090] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:27.928 [2024-12-14 22:45:48.249978] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
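The trace above (nvmf/common.sh@267 through @284) wires two ports of the same NIC back-to-back: cvl_0_0 is moved into a fresh network namespace to host the NVMe-oF target, while cvl_0_1 stays in the root namespace as the initiator. The sequence can be collected into one sketch — interface, namespace, and address names are taken from the log; by default the function only prints the commands, since actually executing them requires root and the physical ports:

```shell
#!/usr/bin/env bash
# Sketch of the back-to-back topology the harness builds above.
# Names (cvl_0_0 / cvl_0_1 / cvl_0_0_ns_spdk, 10.0.0.1/2) come from the
# trace; DRY_RUN=1 (the default) prints each command instead of running it.
set -euo pipefail

setup_netns_topology() {
    local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
    local cmds=(
        "ip -4 addr flush $tgt_if"
        "ip -4 addr flush $ini_if"
        "ip netns add $ns"
        "ip link set $tgt_if netns $ns"
        "ip addr add 10.0.0.1/24 dev $ini_if"
        "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if"
        "ip link set $ini_if up"
        "ip netns exec $ns ip link set $tgt_if up"
        "ip netns exec $ns ip link set lo up"
    )
    local c
    for c in "${cmds[@]}"; do
        if [ "${DRY_RUN:-1}" = 1 ]; then
            echo "$c"          # dry run: show what would be executed
        else
            $c                 # real run (requires root and the NIC ports)
        fi
    done
}
```

After a real run, `ping -c 1 10.0.0.2` from the root namespace and `ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1` should both succeed, which is exactly the connectivity check the log shows next.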
00:36:27.928 [2024-12-14 22:45:48.250011] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:27.928 [2024-12-14 22:45:48.325213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:27.928 [2024-12-14 22:45:48.347020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:27.928 [2024-12-14 22:45:48.347055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:27.928 [2024-12-14 22:45:48.347062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:27.928 [2024-12-14 22:45:48.347068] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:27.929 [2024-12-14 22:45:48.347073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:27.929 [2024-12-14 22:45:48.348353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:27.929 [2024-12-14 22:45:48.348469] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:27.929 [2024-12-14 22:45:48.348470] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:27.929 [2024-12-14 22:45:48.409979] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:27.929 [2024-12-14 22:45:48.410812] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:27.929 [2024-12-14 22:45:48.411225] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:36:27.929 [2024-12-14 22:45:48.411320] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.929 [2024-12-14 22:45:48.477228] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:36:27.929 Malloc0 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.929 Delay0 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
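The rpc_cmd calls traced above (target/abort.sh@17 through @26) build the abort test target in five steps: create the TCP transport, create a malloc bdev, wrap it in a delay bdev so in-flight I/O lingers long enough to abort, then expose it as a subsystem namespace with a listener. Collected as one sequence — the `scripts/rpc.py` path is an assumption (the run above invokes it from the jenkins workspace), and the function only prints the commands rather than talking to a live target:

```shell
#!/usr/bin/env bash
# The RPC sequence from the trace above, as one dry-run function.
# Arguments are copied verbatim from the log; RPC_PY may be overridden.
set -euo pipefail

build_abort_target() {
    local rpc=${RPC_PY:-scripts/rpc.py}   # rpc.py location is an assumption
    local nqn=nqn.2016-06.io.spdk:cnode0
    printf '%s\n' \
        "$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256" \
        "$rpc bdev_malloc_create 64 4096 -b Malloc0" \
        "$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000" \
        "$rpc nvmf_create_subsystem $nqn -a -s SPDK0" \
        "$rpc nvmf_subsystem_add_ns $nqn Delay0" \
        "$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420"
}
```

The delay bdev is the design point here: with 1,000,000 us latency on every operation, the abort example that runs next has a large window of queued I/O to cancel, which is why nearly all 37,992 submitted aborts succeed in the results below.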
00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.929 [2024-12-14 22:45:48.569086] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.929 22:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:27.929 [2024-12-14 22:45:48.737969] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:30.463 Initializing NVMe Controllers 00:36:30.463 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:30.463 controller IO queue size 128 less than required 00:36:30.463 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:30.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:30.463 Initialization complete. Launching workers. 
00:36:30.463 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37935 00:36:30.463 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37992, failed to submit 66 00:36:30.463 success 37935, unsuccessful 57, failed 0 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:30.463 rmmod nvme_tcp 00:36:30.463 rmmod nvme_fabrics 00:36:30.463 rmmod nvme_keyring 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:30.463 22:45:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 547663 ']' 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 547663 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 547663 ']' 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 547663 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 547663 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 547663' 00:36:30.463 killing process with pid 547663 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 547663 00:36:30.463 22:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 547663 00:36:30.464 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:30.464 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:30.464 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:30.464 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:30.464 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:36:30.464 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:30.464 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:36:30.464 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:30.464 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:30.464 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:30.464 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:30.464 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:32.370 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:32.370 00:36:32.370 real 0m11.187s 00:36:32.370 user 0m10.476s 00:36:32.370 sys 0m5.630s 00:36:32.370 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:32.370 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:32.370 ************************************ 00:36:32.370 END TEST nvmf_abort 00:36:32.370 ************************************ 00:36:32.370 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:32.370 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:32.370 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:32.370 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:32.630 ************************************ 00:36:32.630 START TEST nvmf_ns_hotplug_stress 00:36:32.630 ************************************ 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:32.630 * Looking for test storage... 00:36:32.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:32.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.630 --rc genhtml_branch_coverage=1 00:36:32.630 --rc genhtml_function_coverage=1 00:36:32.630 --rc genhtml_legend=1 00:36:32.630 --rc geninfo_all_blocks=1 00:36:32.630 --rc geninfo_unexecuted_blocks=1 00:36:32.630 00:36:32.630 ' 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:32.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.630 --rc genhtml_branch_coverage=1 00:36:32.630 --rc genhtml_function_coverage=1 00:36:32.630 --rc genhtml_legend=1 00:36:32.630 --rc geninfo_all_blocks=1 00:36:32.630 --rc geninfo_unexecuted_blocks=1 00:36:32.630 00:36:32.630 ' 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:32.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.630 --rc genhtml_branch_coverage=1 00:36:32.630 --rc genhtml_function_coverage=1 00:36:32.630 --rc genhtml_legend=1 00:36:32.630 --rc geninfo_all_blocks=1 00:36:32.630 --rc geninfo_unexecuted_blocks=1 00:36:32.630 00:36:32.630 ' 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:32.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.630 --rc genhtml_branch_coverage=1 00:36:32.630 --rc genhtml_function_coverage=1 00:36:32.630 --rc genhtml_legend=1 00:36:32.630 --rc geninfo_all_blocks=1 00:36:32.630 --rc geninfo_unexecuted_blocks=1 00:36:32.630 00:36:32.630 ' 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:32.630 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:32.631 22:45:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:32.631 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:32.631 22:45:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:39.204 22:45:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:39.204 
22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:39.204 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:39.204 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:39.205 22:45:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:39.205 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:39.205 22:45:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:39.205 Found net devices under 0000:af:00.0: cvl_0_0 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:39.205 Found net devices under 0000:af:00.1: cvl_0_1 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:39.205 22:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:39.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:39.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:36:39.205 00:36:39.205 --- 10.0.0.2 ping statistics --- 00:36:39.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:39.205 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:39.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:39.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:36:39.205 00:36:39.205 --- 10.0.0.1 ping statistics --- 00:36:39.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:39.205 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:39.205 22:45:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=551373 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 551373 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 551373 ']' 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:39.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:39.205 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:39.205 [2024-12-14 22:45:59.253748] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:39.205 [2024-12-14 22:45:59.254696] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:36:39.205 [2024-12-14 22:45:59.254733] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:39.205 [2024-12-14 22:45:59.333237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:39.206 [2024-12-14 22:45:59.355685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:39.206 [2024-12-14 22:45:59.355720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:39.206 [2024-12-14 22:45:59.355728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:39.206 [2024-12-14 22:45:59.355734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:39.206 [2024-12-14 22:45:59.355739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:39.206 [2024-12-14 22:45:59.357027] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:39.206 [2024-12-14 22:45:59.357132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:39.206 [2024-12-14 22:45:59.357134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:39.206 [2024-12-14 22:45:59.420322] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:39.206 [2024-12-14 22:45:59.421149] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:39.206 [2024-12-14 22:45:59.421405] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:39.206 [2024-12-14 22:45:59.421557] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:39.206 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:39.206 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:36:39.206 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:39.206 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:39.206 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:39.206 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:39.206 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
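The trace above moves one port into a private network namespace so the target and initiator can talk over real NICs on one host. A minimal dry-run sketch of those steps, with device names and addresses taken from the log (requires root on a real system; here `run` only prints each command):

```shell
# Dry-run sketch of the target-side namespace setup seen in the trace.
# cvl_0_0 / cvl_0_1 and the 10.0.0.x addresses come from the log above.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                       # private namespace for the target
run ip link set cvl_0_0 netns "$NS"          # move target port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # verify the path before starting nvmf_tgt
```

With both interfaces up, the harness then launches `nvmf_tgt` inside the namespace via `ip netns exec`, which is why the NVMF_APP command line in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`.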
00:36:39.206 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:39.206 [2024-12-14 22:45:59.653944] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:39.206 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:39.206 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:39.206 [2024-12-14 22:46:00.038513] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:39.206 22:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:39.465 22:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:39.723 Malloc0 00:36:39.723 22:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:39.981 Delay0 00:36:39.981 22:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.981 22:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:40.240 NULL1 00:36:40.240 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:36:40.498 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=551838 00:36:40.498 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:40.498 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:36:40.498 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:41.875 Read completed with error (sct=0, sc=11) 00:36:41.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:41.875 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:41.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:41.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
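The RPC calls scattered through the trace above provision the target before the stress loop starts. Collected into one dry-run sketch (arguments are taken from the log; `rpc` prints instead of invoking the real `scripts/rpc.py`):

```shell
# Dry-run sketch of the provisioning RPC sequence from the trace.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc bdev_null_create NULL1 1000 512
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

Once both namespaces are attached, `spdk_nvme_perf` is started against `traddr:10.0.0.2 trsvcid:4420` with `-Q 1000`, which is why the expected I/O errors during hot-plug are rate-limited to the "Message suppressed 999 times" lines seen below.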
00:36:41.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:41.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:41.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:41.875 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:41.875 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:42.134 true 00:36:42.134 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:36:42.134 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.071 22:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:43.071 22:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:43.071 22:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:43.330 true 00:36:43.330 22:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:36:43.330 22:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.588 22:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.847 22:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:43.847 22:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:43.847 true 00:36:43.848 22:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:36:43.848 22:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.228 22:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.228 22:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:45.228 22:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:45.487 true 
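Each pass of the stress loop in the trace removes namespace 1, re-adds `Delay0`, and grows `NULL1` by one block (`null_size` 1000 → 1001 → 1002 …) while the perf job keeps issuing reads. One iteration, sketched as a dry run (`rpc` prints rather than executing):

```shell
# Dry-run sketch of one hot-plug stress iteration from the trace.
rpc() { echo "rpc.py $*"; }

null_size=1000                                            # initial size from the log
rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 # yank the namespace mid-I/O
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
null_size=$((null_size + 1))
rpc bdev_null_resize NULL1 "$null_size"                   # resize while attached
```

The loop's liveness check is the `kill -0 551838` between iterations: if the perf process has died, the removal/resize cycle has crashed the initiator path and the test fails.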
00:36:45.487 22:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:36:45.487 22:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.487 22:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.746 22:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:45.746 22:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:46.005 true 00:36:46.005 22:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:36:46.005 22:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.381 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.382 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.382 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.382 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:47.382 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:47.641 true 00:36:47.641 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:36:47.641 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.577 22:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:48.577 22:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:48.577 22:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:48.836 true 00:36:48.836 22:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:36:48.836 22:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:36:49.095 22:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:49.354 22:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:49.354 22:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:49.354 true 00:36:49.354 22:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:36:49.612 22:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.549 22:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:50.549 22:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:50.549 22:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:50.807 true 00:36:50.807 22:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:36:50.807 22:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:36:51.066 22:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:51.325 22:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:51.325 22:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:51.325 true 00:36:51.325 22:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:36:51.325 22:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:52.701 22:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:52.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:52.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:52.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:52.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:52.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:52.701 22:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1011 00:36:52.701 22:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:52.960 true 00:36:52.960 22:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:36:52.960 22:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.897 22:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.897 22:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:53.897 22:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:54.156 true 00:36:54.156 22:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:36:54.156 22:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.423 22:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:54.423 22:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:54.423 22:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:54.682 true 00:36:54.682 22:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:36:54.682 22:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:55.618 22:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:55.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:55.877 22:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:55.877 22:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:56.136 true 00:36:56.136 22:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:36:56.136 22:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:56.395 22:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:56.653 22:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:56.653 22:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:56.653 true 00:36:56.653 22:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:36:56.654 22:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:58.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:58.031 22:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:58.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:58.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:58.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:58.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:58.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:58.032 22:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:58.032 22:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 
00:36:58.290 true 00:36:58.290 22:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:36:58.290 22:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.226 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:59.226 22:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:59.226 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:59.226 22:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:59.226 22:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:59.485 true 00:36:59.485 22:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:36:59.485 22:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.744 22:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:00.003 22:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:37:00.003 22:46:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:37:00.003 true 00:37:00.262 22:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:37:00.262 22:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:01.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.199 22:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:01.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.457 22:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:37:01.457 22:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:37:01.716 true 00:37:01.716 22:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:37:01.716 22:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:02.652 22:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:02.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:02.652 22:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:37:02.652 22:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:37:02.911 true 00:37:02.911 22:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:37:02.911 22:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:03.170 22:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:03.429 22:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:37:03.429 22:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:37:03.429 true 00:37:03.429 22:46:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:37:03.429 22:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.813 22:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:04.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:05.072 22:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:37:05.072 22:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:37:05.072 true 00:37:05.072 22:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:37:05.072 22:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:06.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:06.009 22:46:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:06.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:06.268 22:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:37:06.268 22:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:37:06.268 true 00:37:06.268 22:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:37:06.268 22:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:06.527 22:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:06.786 22:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:37:06.786 22:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:37:07.045 true 00:37:07.045 22:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:37:07.045 22:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:07.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:07.982 22:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:07.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:08.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:08.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:08.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:08.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:08.242 22:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:37:08.242 22:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:37:08.500 true 00:37:08.500 22:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:37:08.500 22:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:09.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:09.437 22:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:09.437 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:37:09.437 22:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:37:09.437 22:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:37:09.696 true 00:37:09.696 22:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:37:09.696 22:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:09.955 22:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:10.214 22:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:37:10.214 22:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:37:10.214 true 00:37:10.214 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:37:10.214 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:11.592 Initializing NVMe Controllers 00:37:11.592 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:11.592 Controller IO queue size 128, 
less than required. 00:37:11.592 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:11.592 Controller IO queue size 128, less than required. 00:37:11.592 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:11.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:11.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:37:11.592 Initialization complete. Launching workers. 00:37:11.592 ======================================================== 00:37:11.592 Latency(us) 00:37:11.592 Device Information : IOPS MiB/s Average min max 00:37:11.592 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2014.98 0.98 43714.50 2516.84 1018881.22 00:37:11.592 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17697.24 8.64 7232.51 1292.88 368094.83 00:37:11.592 ======================================================== 00:37:11.592 Total : 19712.22 9.63 10961.69 1292.88 1018881.22 00:37:11.592 00:37:11.592 22:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:11.592 22:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:37:11.592 22:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:37:11.851 true 00:37:11.851 22:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551838 00:37:11.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: 
line 44: kill: (551838) - No such process 00:37:11.851 22:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 551838 00:37:11.851 22:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:12.147 22:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:12.435 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:37:12.435 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:37:12.435 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:37:12.435 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:12.435 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:37:12.435 null0 00:37:12.435 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:12.435 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:12.435 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:37:12.719 null1 00:37:12.719 22:46:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:12.719 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:12.719 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:12.719 null2 00:37:12.719 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:12.719 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:12.719 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:13.024 null3 00:37:13.024 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:13.024 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:13.024 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:13.283 null4 00:37:13.283 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:13.283 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:13.283 22:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:13.283 null5 00:37:13.283 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:13.283 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:13.283 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:13.541 null6 00:37:13.541 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:13.541 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:13.541 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:13.799 null7 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:13.800 22:46:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 557037 557039 557040 557042 557044 557046 557048 557050 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.800 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.059 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:14.319 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.319 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.319 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:14.319 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.319 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.319 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:14.319 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:14.319 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:14.319 22:46:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:14.319 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:14.319 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:14.319 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:14.319 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:14.319 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:14.577 22:46:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.577 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:14.836 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:14.836 22:46:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:14.836 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:14.836 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:14.836 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:14.836 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:14.836 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:14.836 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:14.836 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.836 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.836 22:46:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:15.094 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.094 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.094 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:15.094 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.094 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.094 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:15.094 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.094 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.094 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:15.094 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.094 22:46:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.094 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:15.094 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.094 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.095 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:15.095 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.095 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.095 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:15.095 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.095 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.095 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:15.095 22:46:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:15.095 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:15.095 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:15.095 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:15.095 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:15.095 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:15.095 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:15.095 22:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:15.353 22:46:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.353 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.353 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:15.353 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.353 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.353 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:15.353 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.353 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.353 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:15.353 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.353 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.353 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:15.353 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.353 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.353 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:15.354 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.354 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.354 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:15.354 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.354 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.354 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:15.354 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.354 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.354 22:46:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:15.612 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:15.612 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:15.612 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:15.612 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:15.612 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:15.612 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:15.612 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:15.612 22:46:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.871 22:46:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.871 22:46:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:15.871 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:16.130 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:16.130 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:16.130 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:16.130 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:16.130 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:16.130 22:46:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:16.130 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:16.130 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.130 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.130 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:16.130 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.130 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.130 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:16.131 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.131 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.131 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
6 nqn.2016-06.io.spdk:cnode1 null5 00:37:16.131 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.131 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.131 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:16.131 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.131 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.131 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:16.131 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.131 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.131 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:16.131 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.131 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.131 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:16.131 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.131 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.131 22:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:16.390 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:16.390 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:16.390 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:16.390 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:16.390 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:16.390 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:16.390 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:16.390 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:16.649 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.649 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.649 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.650 22:46:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.650 22:46:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.650 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.910 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:17.170 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:17.170 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:17.170 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:17.170 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:17.170 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:17.170 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:17.170 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:17.170 22:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.429 22:46:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.429 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:17.688 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:17.688 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:17.688 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5
00:37:17.688 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:17.688 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:17.688 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:17.688 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:17.688 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:17.948 22:46:38
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@121 -- # sync
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:37:17.948 rmmod nvme_tcp
00:37:17.948 rmmod nvme_fabrics
00:37:17.948 rmmod nvme_keyring
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 551373 ']'
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 551373
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 551373 ']'
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 551373
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 551373
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 551373'
killing process with pid 551373
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 551373
00:37:17.948 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 551373
00:37:18.207 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:37:18.207 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:37:18.207 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:37:18.207 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:37:18.207 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:37:18.207 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:37:18.207 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:37:18.207 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:37:18.207 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:37:18.208 22:46:38
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:18.208 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:37:18.208 22:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:20.112 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:37:20.112 
00:37:20.112 real 0m47.717s
00:37:20.112 user 3m0.447s
00:37:20.112 sys 0m19.889s
00:37:20.112 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:20.112 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:37:20.112 ************************************
00:37:20.112 END TEST nvmf_ns_hotplug_stress
00:37:20.112 ************************************
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:37:20.372 ************************************
00:37:20.372 START TEST nvmf_delete_subsystem ************************************
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:37:20.372 * Looking for test storage...
00:37:20.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:37:20.372 
22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:37:20.372 22:46:41
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:37:20.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:20.372 --rc genhtml_branch_coverage=1
00:37:20.372 --rc genhtml_function_coverage=1
00:37:20.372 --rc genhtml_legend=1
00:37:20.372 --rc geninfo_all_blocks=1
00:37:20.372 --rc geninfo_unexecuted_blocks=1
00:37:20.372 
00:37:20.372 '
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:37:20.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:20.372 --rc genhtml_branch_coverage=1
00:37:20.372 --rc genhtml_function_coverage=1
00:37:20.372 --rc genhtml_legend=1
00:37:20.372 --rc geninfo_all_blocks=1
00:37:20.372 --rc geninfo_unexecuted_blocks=1
00:37:20.372 
00:37:20.372 '
00:37:20.372 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:37:20.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:20.372 --rc genhtml_branch_coverage=1
00:37:20.372 --rc genhtml_function_coverage=1
00:37:20.373 --rc genhtml_legend=1
00:37:20.373 --rc geninfo_all_blocks=1
00:37:20.373 --rc 
geninfo_unexecuted_blocks=1
00:37:20.373 
00:37:20.373 '
00:37:20.373 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:37:20.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:20.373 --rc genhtml_branch_coverage=1
00:37:20.373 --rc genhtml_function_coverage=1
00:37:20.373 --rc genhtml_legend=1
00:37:20.373 --rc geninfo_all_blocks=1
00:37:20.373 --rc geninfo_unexecuted_blocks=1
00:37:20.373 
00:37:20.373 '
00:37:20.373 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:37:20.373 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:37:20.373 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:37:20.373 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:37:20.373 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:37:20.373 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:37:20.373 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:37:20.373 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:37:20.373 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:37:20.373 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:37:20.373 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME
00:37:20.373 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.632 
22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']'
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:37:20.632 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:37:20.632 22:46:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:25.905 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:37:25.905 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:37:25.905 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:37:25.905 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:37:25.905 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:37:25.905 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:37:25.905 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:37:25.905 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
Found 0000:af:00.0 (0x8086 - 0x159b)
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)'
Found 0000:af:00.1 (0x8086 - 0x159b)
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:37:25.906 22:46:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:25.906 Found net devices under 0000:af:00.0: cvl_0_0 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:25.906 Found net devices under 0000:af:00.1: cvl_0_1 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:25.906 22:46:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:25.906 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:25.907 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:26.166 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:26.166 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:26.166 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:26.166 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:26.166 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:26.166 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:37:26.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:26.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:37:26.166 00:37:26.166 --- 10.0.0.2 ping statistics --- 00:37:26.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:26.166 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:37:26.166 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:26.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:26.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:37:26.166 00:37:26.166 --- 10.0.0.1 ping statistics --- 00:37:26.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:26.166 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:37:26.166 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:26.166 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:37:26.166 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:26.166 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:26.166 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:26.166 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:26.166 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:26.166 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:26.166 22:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:26.166 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:26.166 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:26.166 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:26.166 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:26.166 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=561335 00:37:26.166 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 561335 00:37:26.166 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:26.166 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 561335 ']' 00:37:26.166 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:26.166 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:26.166 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:26.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:26.166 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:26.166 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:26.425 [2024-12-14 22:46:47.059234] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:26.425 [2024-12-14 22:46:47.060164] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:37:26.425 [2024-12-14 22:46:47.060198] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:26.425 [2024-12-14 22:46:47.135406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:26.425 [2024-12-14 22:46:47.156946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:26.425 [2024-12-14 22:46:47.156983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:26.425 [2024-12-14 22:46:47.156991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:26.425 [2024-12-14 22:46:47.156997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:26.425 [2024-12-14 22:46:47.157003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:26.425 [2024-12-14 22:46:47.158080] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:26.425 [2024-12-14 22:46:47.158084] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:26.425 [2024-12-14 22:46:47.220587] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:26.425 [2024-12-14 22:46:47.221143] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:26.425 [2024-12-14 22:46:47.221316] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:26.425 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:26.425 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:37:26.425 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:26.425 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:26.425 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:26.425 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:26.425 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:26.425 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.425 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:26.425 [2024-12-14 22:46:47.294817] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:26.425 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.425 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:26.425 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.425 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:26.684 [2024-12-14 22:46:47.319087] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:26.684 NULL1 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:26.684 Delay0 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=561362 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:26.684 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:26.684 [2024-12-14 22:46:47.425281] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:37:28.588 22:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:28.588 22:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.588 22:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, 
sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 [2024-12-14 22:46:49.558104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3920 is same with the state(6) to be set 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error 
(sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Read completed with error 
(sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error 
(sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 starting I/O failed: -6 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 [2024-12-14 22:46:49.558744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5fb400d4d0 is same with the state(6) to be set 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 
Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Write completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.848 Read completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Write completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Write completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Write completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Write completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Write completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error 
(sct=0, sc=8) 00:37:28.849 Write completed with error (sct=0, sc=8) 00:37:28.849 Read completed with error (sct=0, sc=8) 00:37:29.786 [2024-12-14 22:46:50.520126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4c260 is same with the state(6) to be set 00:37:29.786 Read completed with error (sct=0, sc=8) 00:37:29.786 Write completed with error (sct=0, sc=8) 00:37:29.786 Write completed with error (sct=0, sc=8) 00:37:29.786 Read completed with error (sct=0, sc=8) 00:37:29.786 Write completed with error (sct=0, sc=8) 00:37:29.786 Read completed with error (sct=0, sc=8) 00:37:29.786 Read completed with error (sct=0, sc=8) 00:37:29.786 Read completed with error (sct=0, sc=8) 00:37:29.786 Read completed with error (sct=0, sc=8) 00:37:29.786 Read completed with error (sct=0, sc=8) 00:37:29.786 Read completed with error (sct=0, sc=8) 00:37:29.786 Write completed with error (sct=0, sc=8) 00:37:29.786 Read completed with error (sct=0, sc=8) 00:37:29.786 Read completed with error (sct=0, sc=8) 00:37:29.786 Read completed with error (sct=0, sc=8) 00:37:29.787 Read completed with error (sct=0, sc=8) 00:37:29.787 Write completed with error (sct=0, sc=8) 00:37:29.787 Read completed with error (sct=0, sc=8) 00:37:29.787 Write completed with error (sct=0, sc=8) 00:37:29.787 Read completed with error (sct=0, sc=8) 00:37:29.787 Read completed with error (sct=0, sc=8) 00:37:29.787 Write completed with error (sct=0, sc=8) 00:37:29.787 [2024-12-14 22:46:50.560379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4ec60 is same with the state(6) to be set 00:37:29.787 Write completed with error (sct=0, sc=8) 00:37:29.787 Read completed with error (sct=0, sc=8) 00:37:29.787 Read completed with error (sct=0, sc=8) 00:37:29.787 Read completed with error (sct=0, sc=8) 00:37:29.787 Read completed with error (sct=0, sc=8) 00:37:29.787 Read completed with error (sct=0, sc=8) 00:37:29.787 Read completed with error (sct=0, 
sc=8) 00:37:29.787 Write completed with error (sct=0, sc=8)
00:37:29.787 Write completed with error (sct=0, sc=8)
[dozens of further repeated "Read/Write completed with error (sct=0, sc=8)" records elided]
00:37:29.787 [2024-12-14 22:46:50.560555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa35f0 is same with the state(6) to be set
00:37:29.787 [2024-12-14 22:46:50.561008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5fb400d800 is same with the state(6) to be set
00:37:29.787 [2024-12-14 22:46:50.561725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5fb400d060 is same with the state(6) to be set
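The repeated completion errors above all carry the same status pair. As a note on interpretation: `sct=0` is the NVMe generic command status type, and in that set status code 8 (0x08) is, to the best of my knowledge, "Command Aborted due to SQ Deletion", which is consistent with this test deleting the subsystem while I/O is in flight. A minimal sketch of tallying such records by operation type (the sample lines below mirror the log format; they are not re-read from the log itself):

```shell
# Hedged sketch: count aborted reads vs writes in spdk_nvme_perf-style
# completion-error records. The sample data imitates the log lines above.
log='Write completed with error (sct=0, sc=8)
Read completed with error (sct=0, sc=8)
Read completed with error (sct=0, sc=8)'

# grep -c counts matching lines; anchors keep Read/Write prefixes distinct.
reads=$(printf '%s\n' "$log" | grep -c '^Read')
writes=$(printf '%s\n' "$log" | grep -c '^Write')
echo "reads=$reads writes=$writes"
```

On a real run, the same two `grep -c` calls could be pointed at the captured perf output instead of the inline sample.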
00:37:29.787 Initializing NVMe Controllers
00:37:29.787 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:29.787 Controller IO queue size 128, less than required.
00:37:29.787 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:29.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:37:29.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:37:29.787 Initialization complete. Launching workers.
00:37:29.787 ========================================================
00:37:29.787 Latency(us)
00:37:29.787 Device Information : IOPS MiB/s Average min max
00:37:29.787 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.26 0.08 918464.07 447.92 2001434.37
00:37:29.787 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 167.74 0.08 898867.49 269.25 1013174.96
00:37:29.787 ========================================================
00:37:29.787 Total : 332.99 0.16 908592.77 269.25 2001434.37
00:37:29.787
00:37:29.787 [2024-12-14 22:46:50.562421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4c260 (9): Bad file descriptor
00:37:29.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:37:29.787 22:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:29.787 22:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:37:29.787 22:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 561362
00:37:29.787 22:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:37:30.355 22:46:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 561362 00:37:30.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (561362) - No such process 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 561362 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 561362 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 561362 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:30.355 22:46:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:30.355 [2024-12-14 22:46:51.091035] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:30.355 22:46:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=561955 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561955 00:37:30.355 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:30.355 [2024-12-14 22:46:51.172358] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
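The xtrace above shows delete_subsystem.sh launching spdk_nvme_perf in the background (saving its PID as `perf_pid`) and then polling it with `kill -0` plus `sleep 0.5` until it exits or a retry budget is exhausted. A minimal standalone sketch of that polling pattern, with a short `sleep` standing in for the background perf process (the PID and timings here are local to the sketch, not the 561955 from the log):

```shell
# Minimal sketch of the kill -0 polling loop from delete_subsystem.sh.
# A short-lived sleep stands in for the background spdk_nvme_perf run.
sleep 0.3 &
pid=$!

delay=0
# kill -0 sends no signal; it only tests whether the PID still exists.
while kill -0 "$pid" 2>/dev/null; do
    if (( delay++ > 20 )); then
        echo "timed out waiting for $pid" >&2
        break
    fi
    sleep 0.1
done

wait "$pid"   # reap the child; wait returns the child's exit status
rc=$?
echo "perf stand-in exited with status $rc"
```

Once the process is fully gone, a later bare `kill -0 "$pid"` prints "No such process", which is exactly the condition the script relies on to detect that perf has finished, as seen further down in the log.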
00:37:30.922 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:30.922 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561955 00:37:30.922 22:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:31.490 22:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:31.490 22:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561955 00:37:31.490 22:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:31.749 22:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:31.749 22:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561955 00:37:31.749 22:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:32.317 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:32.317 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561955 00:37:32.317 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:32.884 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:32.884 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561955 00:37:32.884 22:46:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:33.452 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:33.452 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561955 00:37:33.452 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:33.710 Initializing NVMe Controllers 00:37:33.710 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:33.710 Controller IO queue size 128, less than required. 00:37:33.710 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:33.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:33.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:33.710 Initialization complete. Launching workers. 
00:37:33.710 ========================================================
00:37:33.710 Latency(us)
00:37:33.710 Device Information : IOPS MiB/s Average min max
00:37:33.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002268.66 1000134.85 1008382.19
00:37:33.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003692.52 1000326.57 1009407.21
00:37:33.710 ========================================================
00:37:33.710 Total : 256.00 0.12 1002980.59 1000134.85 1009407.21
00:37:33.710
00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561955
00:37:33.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (561955) - No such process
00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 561955
00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:33.970 rmmod nvme_tcp 00:37:33.970 rmmod nvme_fabrics 00:37:33.970 rmmod nvme_keyring 00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 561335 ']' 00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 561335 00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 561335 ']' 00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 561335 00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 561335 00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 561335' 00:37:33.970 killing process with pid 561335 00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 561335 00:37:33.970 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 561335 00:37:34.229 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:34.229 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:34.229 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:34.229 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:34.229 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:37:34.229 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:34.229 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:37:34.229 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:34.229 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:34.229 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:34.229 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:34.229 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:36.133 22:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:36.133 00:37:36.133 real 0m15.926s 00:37:36.133 user 0m26.130s 00:37:36.133 sys 0m5.849s 00:37:36.133 22:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:36.133 22:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:36.133 ************************************ 00:37:36.133 END TEST nvmf_delete_subsystem 00:37:36.133 ************************************ 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:36.393 ************************************ 00:37:36.393 START TEST nvmf_host_management 00:37:36.393 ************************************ 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:36.393 * Looking for test storage... 
00:37:36.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:36.393 22:46:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:36.393 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:36.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.394 --rc genhtml_branch_coverage=1 00:37:36.394 --rc genhtml_function_coverage=1 00:37:36.394 --rc genhtml_legend=1 00:37:36.394 --rc geninfo_all_blocks=1 00:37:36.394 --rc geninfo_unexecuted_blocks=1 00:37:36.394 00:37:36.394 ' 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:36.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.394 --rc genhtml_branch_coverage=1 00:37:36.394 --rc genhtml_function_coverage=1 00:37:36.394 --rc genhtml_legend=1 00:37:36.394 --rc geninfo_all_blocks=1 00:37:36.394 --rc geninfo_unexecuted_blocks=1 00:37:36.394 00:37:36.394 ' 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:36.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.394 --rc genhtml_branch_coverage=1 00:37:36.394 --rc genhtml_function_coverage=1 00:37:36.394 --rc genhtml_legend=1 00:37:36.394 --rc geninfo_all_blocks=1 00:37:36.394 --rc geninfo_unexecuted_blocks=1 00:37:36.394 00:37:36.394 ' 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:36.394 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.394 --rc genhtml_branch_coverage=1 00:37:36.394 --rc genhtml_function_coverage=1 00:37:36.394 --rc genhtml_legend=1 00:37:36.394 --rc geninfo_all_blocks=1 00:37:36.394 --rc geninfo_unexecuted_blocks=1 00:37:36.394 00:37:36.394 ' 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:36.394 22:46:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.394 
22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable
00:37:36.394 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:42.963 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:37:42.963 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=()
00:37:42.963 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs
00:37:42.963 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=()
00:37:42.963 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:37:42.963 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=()
00:37:42.963 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=()
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=()
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=()
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=()
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:37:42.964 Found 0000:af:00.0 (0x8086 - 0x159b)
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:37:42.964 Found 0000:af:00.1 (0x8086 - 0x159b)
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:37:42.964 Found net devices under 0000:af:00.0: cvl_0_0
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:37:42.964 Found net devices under 0000:af:00.1: cvl_0_1
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:37:42.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:37:42.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms
00:37:42.964
00:37:42.964 --- 10.0.0.2 ping statistics ---
00:37:42.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:42.964 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms
00:37:42.964 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:37:42.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:37:42.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms
00:37:42.964
00:37:42.965 --- 10.0.0.1 ping statistics ---
00:37:42.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:42.965 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=565951
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 565951
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 565951 ']'
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:42.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:42.965 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:42.965 [2024-12-14 22:47:03.028475] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:37:42.965 [2024-12-14 22:47:03.029386] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:37:42.965 [2024-12-14 22:47:03.029420] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:42.965 [2024-12-14 22:47:03.097358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:37:42.965 [2024-12-14 22:47:03.119790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:42.965 [2024-12-14 22:47:03.119828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:42.965 [2024-12-14 22:47:03.119836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:42.965 [2024-12-14 22:47:03.119842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:42.965 [2024-12-14 22:47:03.119847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:37:42.965 [2024-12-14 22:47:03.121332] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:37:42.965 [2024-12-14 22:47:03.121420] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:37:42.965 [2024-12-14 22:47:03.121525] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:37:42.965 [2024-12-14 22:47:03.121527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:37:42.965 [2024-12-14 22:47:03.184256] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:37:42.965 [2024-12-14 22:47:03.185492] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:37:42.965 [2024-12-14 22:47:03.185582] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:37:42.965 [2024-12-14 22:47:03.186004] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:37:42.965 [2024-12-14 22:47:03.186036] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:42.965 [2024-12-14 22:47:03.266353] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:42.965 Malloc0 [2024-12-14 22:47:03.358561] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=565996
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 565996 /var/tmp/bdevperf.sock
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 565996 ']'
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:37:42.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:37:42.965 {
00:37:42.965 "params": {
00:37:42.965 "name": "Nvme$subsystem",
00:37:42.965 "trtype": "$TEST_TRANSPORT",
00:37:42.965 "traddr": "$NVMF_FIRST_TARGET_IP",
00:37:42.965 "adrfam": "ipv4",
00:37:42.965 "trsvcid": "$NVMF_PORT",
00:37:42.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:37:42.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:37:42.965 "hdgst": ${hdgst:-false},
00:37:42.965 "ddgst": ${ddgst:-false}
00:37:42.965 },
00:37:42.965 "method": "bdev_nvme_attach_controller"
00:37:42.965 }
00:37:42.965 EOF
00:37:42.965 )")
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:37:42.965 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:37:42.965 "params": {
00:37:42.965 "name": "Nvme0",
00:37:42.965 "trtype": "tcp",
00:37:42.965 "traddr": "10.0.0.2",
00:37:42.965 "adrfam": "ipv4",
00:37:42.965 "trsvcid": "4420",
00:37:42.965 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:37:42.965 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:37:42.966 "hdgst": false,
00:37:42.966 "ddgst": false
00:37:42.966 },
00:37:42.966 "method": "bdev_nvme_attach_controller"
00:37:42.966 }'
00:37:42.966 [2024-12-14 22:47:03.451419] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:37:42.966 [2024-12-14 22:47:03.451468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid565996 ]
00:37:42.966 [2024-12-14 22:47:03.527286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:42.966 [2024-12-14 22:47:03.549562] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:37:42.966 Running I/O for 10 seconds...
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=107
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 107 -ge 100 ']'
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:42.966 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:43.249 [2024-12-14 22:47:03.846570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:43.249 [2024-12-14 22:47:03.846614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:43.249 [2024-12-14 22:47:03.846629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:43.249 [2024-12-14 22:47:03.846637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:43.250 [2024-12-14 22:47:03.846646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:43.250 [2024-12-14 22:47:03.846653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:43.250 [2024-12-14 22:47:03.846661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:43.250 [2024-12-14 22:47:03.846667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:43.250 [2024-12-14 22:47:03.846675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:43.250 [2024-12-14 22:47:03.846681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:43.250 [2024-12-14 22:47:03.846689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:43.250 [2024-12-14 22:47:03.846696] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.846710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.846725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.846740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.846755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.846769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.846783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.846803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.846818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.846832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.846847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.846862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:37:43.250 [2024-12-14 22:47:03.846873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.846880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.846895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.846915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.846930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.846945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 
22:47:03.846959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.846973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.846990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.846998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.847005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.847013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.847019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.847027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.847033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.847041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.847048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.847057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.847064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.847071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.847078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.847085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.847092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.847099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.847105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.847115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.847122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.847130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.847136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.847144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.847151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.847159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.847166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.847175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.847182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.847189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.847195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.847203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.847210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.847218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.250 [2024-12-14 22:47:03.847225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.250 [2024-12-14 22:47:03.847233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 
[2024-12-14 22:47:03.847292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.251 [2024-12-14 22:47:03.847566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.847589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.251 [2024-12-14 22:47:03.848507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:43.251 task offset: 24576 on job bdev=Nvme0n1 fails 00:37:43.251 00:37:43.251 Latency(us) 00:37:43.251 [2024-12-14T21:47:04.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:43.251 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:43.251 Job: Nvme0n1 ended in about 0.11 seconds with error 00:37:43.251 Verification LBA range: start 0x0 length 0x400 00:37:43.251 Nvme0n1 : 0.11 1766.88 110.43 588.96 0.00 25045.24 1568.18 27337.87 00:37:43.251 [2024-12-14T21:47:04.135Z] =================================================================================================================== 00:37:43.251 [2024-12-14T21:47:04.135Z] Total : 1766.88 110.43 588.96 0.00 25045.24 1568.18 27337.87 00:37:43.251 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.251 [2024-12-14 
22:47:03.850874] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:43.251 [2024-12-14 22:47:03.850897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252fd40 (9): Bad file descriptor 00:37:43.251 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:43.251 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.251 [2024-12-14 22:47:03.851765] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:37:43.251 [2024-12-14 22:47:03.851838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:37:43.251 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:43.251 [2024-12-14 22:47:03.851860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:43.251 [2024-12-14 22:47:03.851875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:37:43.251 [2024-12-14 22:47:03.851882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:37:43.251 [2024-12-14 22:47:03.851889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.251 [2024-12-14 22:47:03.851895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x252fd40 00:37:43.251 [2024-12-14 22:47:03.851922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x252fd40 (9): Bad file descriptor 00:37:43.251 [2024-12-14 22:47:03.851934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:43.251 [2024-12-14 22:47:03.851941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:43.251 [2024-12-14 22:47:03.851954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:37:43.251 [2024-12-14 22:47:03.851961] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:37:43.251 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.251 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:44.189 22:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 565996 00:37:44.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (565996) - No such process 00:37:44.189 22:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:44.189 22:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:44.189 22:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:44.189 22:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:44.189 22:47:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:44.189 22:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:44.189 22:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:44.189 22:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:44.189 { 00:37:44.189 "params": { 00:37:44.189 "name": "Nvme$subsystem", 00:37:44.189 "trtype": "$TEST_TRANSPORT", 00:37:44.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:44.189 "adrfam": "ipv4", 00:37:44.189 "trsvcid": "$NVMF_PORT", 00:37:44.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:44.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:44.189 "hdgst": ${hdgst:-false}, 00:37:44.189 "ddgst": ${ddgst:-false} 00:37:44.189 }, 00:37:44.189 "method": "bdev_nvme_attach_controller" 00:37:44.189 } 00:37:44.189 EOF 00:37:44.189 )") 00:37:44.189 22:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:44.189 22:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:37:44.189 22:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:44.189 22:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:44.189 "params": { 00:37:44.189 "name": "Nvme0", 00:37:44.189 "trtype": "tcp", 00:37:44.189 "traddr": "10.0.0.2", 00:37:44.189 "adrfam": "ipv4", 00:37:44.189 "trsvcid": "4420", 00:37:44.189 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:44.189 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:44.189 "hdgst": false, 00:37:44.189 "ddgst": false 00:37:44.189 }, 00:37:44.189 "method": "bdev_nvme_attach_controller" 00:37:44.189 }' 00:37:44.189 [2024-12-14 22:47:04.914067] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:37:44.189 [2024-12-14 22:47:04.914115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid566234 ] 00:37:44.189 [2024-12-14 22:47:04.988503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:44.189 [2024-12-14 22:47:05.008911] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:44.449 Running I/O for 1 seconds... 
00:37:45.830 1984.00 IOPS, 124.00 MiB/s 00:37:45.830 Latency(us) 00:37:45.830 [2024-12-14T21:47:06.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:45.830 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:45.830 Verification LBA range: start 0x0 length 0x400 00:37:45.830 Nvme0n1 : 1.00 2042.94 127.68 0.00 0.00 30838.06 6491.18 26963.38 00:37:45.830 [2024-12-14T21:47:06.714Z] =================================================================================================================== 00:37:45.830 [2024-12-14T21:47:06.714Z] Total : 2042.94 127.68 0.00 0.00 30838.06 6491.18 26963.38 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:45.830 
22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:45.830 rmmod nvme_tcp 00:37:45.830 rmmod nvme_fabrics 00:37:45.830 rmmod nvme_keyring 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 565951 ']' 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 565951 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 565951 ']' 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 565951 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:45.830 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 565951 00:37:45.831 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:45.831 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:45.831 22:47:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 565951' 00:37:45.831 killing process with pid 565951 00:37:45.831 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 565951 00:37:45.831 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 565951 00:37:45.831 [2024-12-14 22:47:06.708870] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:46.090 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:46.090 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:46.090 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:46.090 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:46.090 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:37:46.090 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:46.090 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:37:46.090 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:46.090 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:46.090 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:46.090 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:46.090 22:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:47.994 22:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:47.994 22:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:47.994 00:37:47.994 real 0m11.779s 00:37:47.994 user 0m16.165s 00:37:47.994 sys 0m5.940s 00:37:47.994 22:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:47.994 22:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:47.994 ************************************ 00:37:47.994 END TEST nvmf_host_management 00:37:47.994 ************************************ 00:37:47.994 22:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:47.994 22:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:47.994 22:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:47.994 22:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:48.254 ************************************ 00:37:48.254 START TEST nvmf_lvol 00:37:48.254 ************************************ 00:37:48.254 22:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:48.254 * Looking for test storage... 
00:37:48.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:48.254 22:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:48.254 22:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:37:48.254 22:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:48.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.254 --rc genhtml_branch_coverage=1 00:37:48.254 --rc genhtml_function_coverage=1 00:37:48.254 --rc genhtml_legend=1 00:37:48.254 --rc geninfo_all_blocks=1 00:37:48.254 --rc geninfo_unexecuted_blocks=1 00:37:48.254 00:37:48.254 ' 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:48.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.254 --rc genhtml_branch_coverage=1 00:37:48.254 --rc genhtml_function_coverage=1 00:37:48.254 --rc genhtml_legend=1 00:37:48.254 --rc geninfo_all_blocks=1 00:37:48.254 --rc geninfo_unexecuted_blocks=1 00:37:48.254 00:37:48.254 ' 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:48.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.254 --rc genhtml_branch_coverage=1 00:37:48.254 --rc genhtml_function_coverage=1 00:37:48.254 --rc genhtml_legend=1 00:37:48.254 --rc geninfo_all_blocks=1 00:37:48.254 --rc geninfo_unexecuted_blocks=1 00:37:48.254 00:37:48.254 ' 00:37:48.254 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:48.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.254 --rc genhtml_branch_coverage=1 00:37:48.254 --rc genhtml_function_coverage=1 00:37:48.255 --rc genhtml_legend=1 00:37:48.255 --rc geninfo_all_blocks=1 00:37:48.255 --rc geninfo_unexecuted_blocks=1 00:37:48.255 00:37:48.255 ' 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:48.255 
22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:48.255 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:54.824 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:54.825 22:47:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:54.825 22:47:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:54.825 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:54.825 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:54.825 22:47:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:54.825 Found net devices under 0000:af:00.0: cvl_0_0 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:54.825 22:47:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:54.825 Found net devices under 0000:af:00.1: cvl_0_1 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:54.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:54.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:37:54.825 00:37:54.825 --- 10.0.0.2 ping statistics --- 00:37:54.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:54.825 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:37:54.825 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:54.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:54.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:37:54.826 00:37:54.826 --- 10.0.0.1 ping statistics --- 00:37:54.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:54.826 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=569924 
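The trace up to this point (nvmf_tcp_init in nvmf/common.sh) moves the target-side port into a private network namespace, assigns the 10.0.0.x/24 pair, opens TCP 4420 in iptables, and ping-checks both directions before the target app starts. A minimal sketch of that setup, assuming the interface names cvl_0_0/cvl_0_1 and the address plan seen in this run; the real commands need root and physical NICs, so this helper only prints the plan rather than executing it:

```shell
# Sketch of the nvmf_tcp_init namespace setup (names/addresses taken from this log).
# The function emits the command plan instead of running it, since ip/iptables
# here require root and the cvl_* devices from this specific test rig.
setup_target_ns() {
  local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
  cat <<CMDS
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $ns ping -c 1 10.0.0.1
CMDS
}
setup_target_ns
```

The namespace keeps the target's listener (10.0.0.2) isolated from the host stack, which is why nvmf_tgt itself is later launched via `ip netns exec cvl_0_0_ns_spdk`.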
00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 569924 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 569924 ']' 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:54.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:54.826 22:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:54.826 [2024-12-14 22:47:14.934328] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:54.826 [2024-12-14 22:47:14.935248] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:37:54.826 [2024-12-14 22:47:14.935280] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:54.826 [2024-12-14 22:47:15.013319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:54.826 [2024-12-14 22:47:15.035493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:54.826 [2024-12-14 22:47:15.035529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:54.826 [2024-12-14 22:47:15.035536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:54.826 [2024-12-14 22:47:15.035542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:54.826 [2024-12-14 22:47:15.035551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:54.826 [2024-12-14 22:47:15.036802] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:54.826 [2024-12-14 22:47:15.036912] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.826 [2024-12-14 22:47:15.036925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:54.826 [2024-12-14 22:47:15.099145] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:54.826 [2024-12-14 22:47:15.099965] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:54.826 [2024-12-14 22:47:15.100456] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
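With the target now running in interrupt mode, nvmf_lvol.sh drives everything through rpc.py: two malloc bdevs are combined into a raid0, an lvstore and a 20 MiB lvol are carved from it, the lvol is exported over NVMe/TCP, and then snapshot/resize/clone/inflate are exercised under perf load. A hedged sketch of that RPC sequence, printed rather than executed; the `$rpc` path is shortened from the absolute jenkins path in the log, and the `<...-uuid>` placeholders stand in for the run-specific UUIDs (e.g. 2c635f84-... for the lvstore):

```shell
# Sketch of the nvmf_lvol RPC flow recorded in this log. Emitted as a plan only:
# running it needs a live nvmf_tgt and the UUIDs returned by earlier calls.
rpc=scripts/rpc.py   # assumption: relative path; the log uses the full workspace path
lvol_flow() {
  cat <<CMDS
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512
$rpc bdev_malloc_create 64 512
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"
$rpc bdev_lvol_create_lvstore raid0 lvs
$rpc bdev_lvol_create -u <lvs-uuid> lvol 20
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
$rpc bdev_lvol_resize <lvol-uuid> 30
$rpc bdev_lvol_clone <snapshot-uuid> MY_CLONE
$rpc bdev_lvol_inflate <clone-uuid>
CMDS
}
lvol_flow
```

Note the ordering: the snapshot is taken while spdk_nvme_perf is writing to the lvol, so resize, clone, and inflate all run against a volume under active I/O, which is the point of this test.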
00:37:54.826 [2024-12-14 22:47:15.100527] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:54.826 22:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:54.826 22:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:37:54.826 22:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:54.826 22:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:54.826 22:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:54.826 22:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:54.826 22:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:54.826 [2024-12-14 22:47:15.337689] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:54.826 22:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:54.826 22:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:54.826 22:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:55.085 22:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:55.085 22:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:55.343 22:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:55.602 22:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2c635f84-3c27-4d79-b2ea-d516c39d9ca3 00:37:55.602 22:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2c635f84-3c27-4d79-b2ea-d516c39d9ca3 lvol 20 00:37:55.602 22:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5705a6bf-b4bf-4cc7-b9e9-14356687e019 00:37:55.602 22:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:55.861 22:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5705a6bf-b4bf-4cc7-b9e9-14356687e019 00:37:56.120 22:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:56.120 [2024-12-14 22:47:16.961592] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:56.120 22:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:56.380 
22:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=570400 00:37:56.380 22:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:56.380 22:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:57.758 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5705a6bf-b4bf-4cc7-b9e9-14356687e019 MY_SNAPSHOT 00:37:57.758 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=75558e3b-fa20-4b7d-9f38-5dd7207a7f02 00:37:57.758 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5705a6bf-b4bf-4cc7-b9e9-14356687e019 30 00:37:58.017 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 75558e3b-fa20-4b7d-9f38-5dd7207a7f02 MY_CLONE 00:37:58.276 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=33e4b5af-3013-4ab5-82e0-fb4438d80688 00:37:58.276 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 33e4b5af-3013-4ab5-82e0-fb4438d80688 00:37:58.534 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 570400 00:38:08.514 Initializing NVMe Controllers 00:38:08.514 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:08.514 
Controller IO queue size 128, less than required. 00:38:08.514 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:08.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:38:08.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:38:08.514 Initialization complete. Launching workers. 00:38:08.514 ======================================================== 00:38:08.514 Latency(us) 00:38:08.514 Device Information : IOPS MiB/s Average min max 00:38:08.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12519.30 48.90 10223.25 330.78 65802.77 00:38:08.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12366.60 48.31 10349.53 2460.30 59916.65 00:38:08.514 ======================================================== 00:38:08.514 Total : 24885.90 97.21 10286.00 330.78 65802.77 00:38:08.514 00:38:08.514 22:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:08.514 22:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5705a6bf-b4bf-4cc7-b9e9-14356687e019 00:38:08.514 22:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2c635f84-3c27-4d79-b2ea-d516c39d9ca3 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:08.514 rmmod nvme_tcp 00:38:08.514 rmmod nvme_fabrics 00:38:08.514 rmmod nvme_keyring 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 569924 ']' 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 569924 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 569924 ']' 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 569924 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 569924 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 569924' 00:38:08.514 killing process with pid 569924 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 569924 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 569924 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:08.514 22:47:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:08.514 22:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:09.894 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:09.894 00:38:09.894 real 0m21.594s 00:38:09.894 user 0m55.337s 00:38:09.894 sys 0m9.656s 00:38:09.894 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:09.894 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:09.894 ************************************ 00:38:09.894 END TEST nvmf_lvol 00:38:09.894 ************************************ 00:38:09.894 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:09.894 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:09.894 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:09.894 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:09.894 ************************************ 00:38:09.894 START TEST nvmf_lvs_grow 00:38:09.894 ************************************ 00:38:09.894 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:09.894 * Looking for test storage... 
00:38:09.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:09.895 22:47:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:09.895 22:47:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:09.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.895 --rc genhtml_branch_coverage=1 00:38:09.895 --rc genhtml_function_coverage=1 00:38:09.895 --rc genhtml_legend=1 00:38:09.895 --rc geninfo_all_blocks=1 00:38:09.895 --rc geninfo_unexecuted_blocks=1 00:38:09.895 00:38:09.895 ' 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:09.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.895 --rc genhtml_branch_coverage=1 00:38:09.895 --rc genhtml_function_coverage=1 00:38:09.895 --rc genhtml_legend=1 00:38:09.895 --rc geninfo_all_blocks=1 00:38:09.895 --rc geninfo_unexecuted_blocks=1 00:38:09.895 00:38:09.895 ' 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:09.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.895 --rc genhtml_branch_coverage=1 00:38:09.895 --rc genhtml_function_coverage=1 00:38:09.895 --rc genhtml_legend=1 00:38:09.895 --rc geninfo_all_blocks=1 00:38:09.895 --rc geninfo_unexecuted_blocks=1 00:38:09.895 00:38:09.895 ' 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:09.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.895 --rc genhtml_branch_coverage=1 00:38:09.895 --rc genhtml_function_coverage=1 00:38:09.895 --rc genhtml_legend=1 00:38:09.895 --rc geninfo_all_blocks=1 00:38:09.895 --rc 
geninfo_unexecuted_blocks=1 00:38:09.895 00:38:09.895 ' 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:09.895 22:47:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.895 22:47:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:09.895 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:09.896 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:09.896 22:47:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:09.896 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:09.896 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:38:09.896 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:09.896 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:09.896 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:09.896 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:09.896 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:09.896 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:09.896 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:10.155 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:10.155 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:10.155 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:10.155 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:38:10.155 22:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:16.731 
22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:16.731 22:47:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:16.731 22:47:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:16.731 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:16.731 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:16.731 Found net devices under 0000:af:00.0: cvl_0_0 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:16.731 22:47:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:16.731 Found net devices under 0000:af:00.1: cvl_0_1 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:16.731 
22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:16.731 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:16.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:16.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:38:16.732 00:38:16.732 --- 10.0.0.2 ping statistics --- 00:38:16.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:16.732 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:16.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:16.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:38:16.732 00:38:16.732 --- 10.0.0.1 ping statistics --- 00:38:16.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:16.732 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:16.732 22:47:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=575438 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 575438 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 575438 ']' 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:16.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:16.732 [2024-12-14 22:47:36.697780] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:16.732 [2024-12-14 22:47:36.698687] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:38:16.732 [2024-12-14 22:47:36.698719] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:16.732 [2024-12-14 22:47:36.776808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.732 [2024-12-14 22:47:36.798594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:16.732 [2024-12-14 22:47:36.798629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:16.732 [2024-12-14 22:47:36.798636] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:16.732 [2024-12-14 22:47:36.798642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:16.732 [2024-12-14 22:47:36.798648] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:16.732 [2024-12-14 22:47:36.799139] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:16.732 [2024-12-14 22:47:36.862121] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:16.732 [2024-12-14 22:47:36.862335] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
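The namespace plumbing and interrupt-mode target launch traced above reduce to a handful of commands. A minimal sketch follows, with device names `cvl_0_0`/`cvl_0_1`, addresses, and flags copied from the log; running them for real needs root and an SPDK build, so the function only prints the command lines (the bare `iptables` form without the harness's comment tag, and the unqualified `nvmf_tgt` name, are simplifications):

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init namespace split plus the target launch
# seen in the log above. Nothing here is executed; the function emits
# the command lines so the flow can be read (and checked) in one place.
emit_nvmf_tcp_setup() {
  local ns=cvl_0_0_ns_spdk              # namespace name from the log
  cat <<EOF
ip netns add $ns
ip link set cvl_0_0 netns $ns
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $ns ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $ns nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
EOF
}
emit_nvmf_tcp_setup
```

The point of the split: the target-side port moves into the namespace while the initiator-side port stays in the host namespace, so a single dual-port NIC can exercise real TCP traffic against itself, and the two pings verify reachability in both directions before the target starts.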
00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:16.732 22:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:16.732 [2024-12-14 22:47:37.095780] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:16.732 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:16.732 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:16.732 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:16.732 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:16.732 ************************************ 00:38:16.732 START TEST lvs_grow_clean 00:38:16.732 ************************************ 00:38:16.732 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:38:16.732 22:47:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:16.732 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:16.732 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:16.732 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:16.732 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:16.732 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:16.732 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:16.732 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:16.732 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:16.732 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:16.732 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:16.732 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ec405035-5e73-4742-8d7e-b357428e7302 00:38:16.732 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec405035-5e73-4742-8d7e-b357428e7302 00:38:16.732 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:16.991 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:16.991 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:16.992 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ec405035-5e73-4742-8d7e-b357428e7302 lvol 150 00:38:17.250 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e928c557-0f3c-4b3f-8a0c-c5ec0f699d64 00:38:17.250 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:17.250 22:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:17.509 [2024-12-14 22:47:38.143519] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:17.509 [2024-12-14 22:47:38.143645] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:17.509 true 00:38:17.509 22:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:17.509 22:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec405035-5e73-4742-8d7e-b357428e7302 00:38:17.509 22:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:17.509 22:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:17.768 22:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e928c557-0f3c-4b3f-8a0c-c5ec0f699d64 00:38:18.026 22:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:18.286 [2024-12-14 22:47:38.916011] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:18.286 22:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:18.286 22:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=575915 00:38:18.286 22:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:18.286 22:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:18.286 22:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 575915 /var/tmp/bdevperf.sock 00:38:18.286 22:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 575915 ']' 00:38:18.286 22:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:18.286 22:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:18.286 22:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:18.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
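The bdevperf process launched here runs with `-o 4096 -q 128 -w randwrite -t 10`, so the IOPS and MiB/s columns it prints are related simply by the I/O size. A small sketch of that conversion, checked against the 23659.05 IOPS / 92.42 MiB/s figures that appear in this log's own final summary (those numbers are taken from the log, not recomputed from the device):

```python
# Check the IOPS <-> MiB/s relation bdevperf reports (io size from -o 4096).
IO_SIZE = 4096  # bytes, the bdevperf -o argument used in this run

def mibps(iops, io_size=IO_SIZE):
    """Throughput in MiB/s implied by an IOPS figure at a fixed I/O size."""
    return iops * io_size / (1024 * 1024)

# Figure from the final 10-second summary later in this log.
print(round(mibps(23659.05), 2))  # 92.42, matching the reported "mibps" field
```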
00:38:18.286 22:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:18.286 22:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:18.545 [2024-12-14 22:47:39.170697] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:18.545 [2024-12-14 22:47:39.170743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid575915 ] 00:38:18.545 [2024-12-14 22:47:39.245679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:18.545 [2024-12-14 22:47:39.268067] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:18.545 22:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:18.545 22:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:38:18.545 22:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:19.112 Nvme0n1 00:38:19.112 22:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:19.112 [ 00:38:19.112 { 00:38:19.112 "name": "Nvme0n1", 00:38:19.112 "aliases": [ 00:38:19.112 "e928c557-0f3c-4b3f-8a0c-c5ec0f699d64" 00:38:19.112 ], 00:38:19.112 "product_name": "NVMe disk", 00:38:19.112 
"block_size": 4096, 00:38:19.112 "num_blocks": 38912, 00:38:19.112 "uuid": "e928c557-0f3c-4b3f-8a0c-c5ec0f699d64", 00:38:19.112 "numa_id": 1, 00:38:19.112 "assigned_rate_limits": { 00:38:19.112 "rw_ios_per_sec": 0, 00:38:19.112 "rw_mbytes_per_sec": 0, 00:38:19.112 "r_mbytes_per_sec": 0, 00:38:19.112 "w_mbytes_per_sec": 0 00:38:19.112 }, 00:38:19.112 "claimed": false, 00:38:19.112 "zoned": false, 00:38:19.112 "supported_io_types": { 00:38:19.112 "read": true, 00:38:19.112 "write": true, 00:38:19.112 "unmap": true, 00:38:19.112 "flush": true, 00:38:19.112 "reset": true, 00:38:19.112 "nvme_admin": true, 00:38:19.112 "nvme_io": true, 00:38:19.112 "nvme_io_md": false, 00:38:19.112 "write_zeroes": true, 00:38:19.112 "zcopy": false, 00:38:19.112 "get_zone_info": false, 00:38:19.112 "zone_management": false, 00:38:19.112 "zone_append": false, 00:38:19.112 "compare": true, 00:38:19.112 "compare_and_write": true, 00:38:19.112 "abort": true, 00:38:19.112 "seek_hole": false, 00:38:19.112 "seek_data": false, 00:38:19.112 "copy": true, 00:38:19.112 "nvme_iov_md": false 00:38:19.112 }, 00:38:19.112 "memory_domains": [ 00:38:19.112 { 00:38:19.112 "dma_device_id": "system", 00:38:19.112 "dma_device_type": 1 00:38:19.112 } 00:38:19.112 ], 00:38:19.112 "driver_specific": { 00:38:19.112 "nvme": [ 00:38:19.112 { 00:38:19.112 "trid": { 00:38:19.112 "trtype": "TCP", 00:38:19.112 "adrfam": "IPv4", 00:38:19.112 "traddr": "10.0.0.2", 00:38:19.112 "trsvcid": "4420", 00:38:19.112 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:19.112 }, 00:38:19.112 "ctrlr_data": { 00:38:19.112 "cntlid": 1, 00:38:19.112 "vendor_id": "0x8086", 00:38:19.112 "model_number": "SPDK bdev Controller", 00:38:19.112 "serial_number": "SPDK0", 00:38:19.112 "firmware_revision": "25.01", 00:38:19.112 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:19.112 "oacs": { 00:38:19.112 "security": 0, 00:38:19.112 "format": 0, 00:38:19.112 "firmware": 0, 00:38:19.112 "ns_manage": 0 00:38:19.112 }, 00:38:19.112 "multi_ctrlr": true, 
00:38:19.112 "ana_reporting": false 00:38:19.112 }, 00:38:19.112 "vs": { 00:38:19.112 "nvme_version": "1.3" 00:38:19.112 }, 00:38:19.112 "ns_data": { 00:38:19.112 "id": 1, 00:38:19.112 "can_share": true 00:38:19.112 } 00:38:19.112 } 00:38:19.112 ], 00:38:19.112 "mp_policy": "active_passive" 00:38:19.112 } 00:38:19.112 } 00:38:19.112 ] 00:38:19.112 22:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=576007 00:38:19.112 22:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:19.112 22:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:19.112 Running I/O for 10 seconds... 00:38:20.490 Latency(us) 00:38:20.490 [2024-12-14T21:47:41.374Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:20.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:20.490 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:38:20.490 [2024-12-14T21:47:41.374Z] =================================================================================================================== 00:38:20.490 [2024-12-14T21:47:41.374Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:38:20.490 00:38:21.057 22:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ec405035-5e73-4742-8d7e-b357428e7302 00:38:21.316 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:21.316 Nvme0n1 : 2.00 23177.50 90.54 0.00 0.00 0.00 0.00 0.00 00:38:21.316 [2024-12-14T21:47:42.200Z] 
=================================================================================================================== 00:38:21.316 [2024-12-14T21:47:42.200Z] Total : 23177.50 90.54 0.00 0.00 0.00 0.00 0.00 00:38:21.316 00:38:21.316 true 00:38:21.316 22:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec405035-5e73-4742-8d7e-b357428e7302 00:38:21.316 22:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:21.575 22:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:21.575 22:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:21.575 22:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 576007 00:38:22.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:22.143 Nvme0n1 : 3.00 23304.67 91.03 0.00 0.00 0.00 0.00 0.00 00:38:22.143 [2024-12-14T21:47:43.027Z] =================================================================================================================== 00:38:22.143 [2024-12-14T21:47:43.027Z] Total : 23304.67 91.03 0.00 0.00 0.00 0.00 0.00 00:38:22.143 00:38:23.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:23.521 Nvme0n1 : 4.00 23415.75 91.47 0.00 0.00 0.00 0.00 0.00 00:38:23.521 [2024-12-14T21:47:44.405Z] =================================================================================================================== 00:38:23.521 [2024-12-14T21:47:44.405Z] Total : 23415.75 91.47 0.00 0.00 0.00 0.00 0.00 00:38:23.521 00:38:24.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:38:24.457 Nvme0n1 : 5.00 23482.40 91.73 0.00 0.00 0.00 0.00 0.00 00:38:24.457 [2024-12-14T21:47:45.341Z] =================================================================================================================== 00:38:24.457 [2024-12-14T21:47:45.341Z] Total : 23482.40 91.73 0.00 0.00 0.00 0.00 0.00 00:38:24.457 00:38:25.391 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:25.391 Nvme0n1 : 6.00 23537.50 91.94 0.00 0.00 0.00 0.00 0.00 00:38:25.391 [2024-12-14T21:47:46.275Z] =================================================================================================================== 00:38:25.391 [2024-12-14T21:47:46.275Z] Total : 23537.50 91.94 0.00 0.00 0.00 0.00 0.00 00:38:25.391 00:38:26.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:26.328 Nvme0n1 : 7.00 23583.71 92.12 0.00 0.00 0.00 0.00 0.00 00:38:26.328 [2024-12-14T21:47:47.212Z] =================================================================================================================== 00:38:26.328 [2024-12-14T21:47:47.212Z] Total : 23583.71 92.12 0.00 0.00 0.00 0.00 0.00 00:38:26.328 00:38:27.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:27.266 Nvme0n1 : 8.00 23612.38 92.24 0.00 0.00 0.00 0.00 0.00 00:38:27.266 [2024-12-14T21:47:48.150Z] =================================================================================================================== 00:38:27.266 [2024-12-14T21:47:48.150Z] Total : 23612.38 92.24 0.00 0.00 0.00 0.00 0.00 00:38:27.266 00:38:28.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:28.202 Nvme0n1 : 9.00 23638.33 92.34 0.00 0.00 0.00 0.00 0.00 00:38:28.202 [2024-12-14T21:47:49.086Z] =================================================================================================================== 00:38:28.202 [2024-12-14T21:47:49.086Z] Total : 23638.33 92.34 0.00 0.00 0.00 0.00 0.00 00:38:28.202 
00:38:29.139 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:29.139 Nvme0n1 : 10.00 23655.80 92.41 0.00 0.00 0.00 0.00 0.00 00:38:29.139 [2024-12-14T21:47:50.023Z] =================================================================================================================== 00:38:29.139 [2024-12-14T21:47:50.023Z] Total : 23655.80 92.41 0.00 0.00 0.00 0.00 0.00 00:38:29.139 00:38:29.399 00:38:29.399 Latency(us) 00:38:29.399 [2024-12-14T21:47:50.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:29.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:29.399 Nvme0n1 : 10.00 23659.05 92.42 0.00 0.00 5407.09 3136.37 26838.55 00:38:29.399 [2024-12-14T21:47:50.283Z] =================================================================================================================== 00:38:29.399 [2024-12-14T21:47:50.283Z] Total : 23659.05 92.42 0.00 0.00 5407.09 3136.37 26838.55 00:38:29.399 { 00:38:29.399 "results": [ 00:38:29.399 { 00:38:29.399 "job": "Nvme0n1", 00:38:29.399 "core_mask": "0x2", 00:38:29.399 "workload": "randwrite", 00:38:29.399 "status": "finished", 00:38:29.399 "queue_depth": 128, 00:38:29.399 "io_size": 4096, 00:38:29.399 "runtime": 10.004036, 00:38:29.399 "iops": 23659.051206932883, 00:38:29.399 "mibps": 92.41816877708158, 00:38:29.399 "io_failed": 0, 00:38:29.399 "io_timeout": 0, 00:38:29.399 "avg_latency_us": 5407.092021569263, 00:38:29.399 "min_latency_us": 3136.365714285714, 00:38:29.399 "max_latency_us": 26838.55238095238 00:38:29.399 } 00:38:29.399 ], 00:38:29.399 "core_count": 1 00:38:29.399 } 00:38:29.399 22:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 575915 00:38:29.399 22:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 575915 ']' 00:38:29.399 22:47:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 575915 00:38:29.399 22:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:38:29.399 22:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:29.399 22:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 575915 00:38:29.399 22:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:29.399 22:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:29.399 22:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 575915' 00:38:29.399 killing process with pid 575915 00:38:29.399 22:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 575915 00:38:29.399 Received shutdown signal, test time was about 10.000000 seconds 00:38:29.399 00:38:29.399 Latency(us) 00:38:29.399 [2024-12-14T21:47:50.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:29.399 [2024-12-14T21:47:50.283Z] =================================================================================================================== 00:38:29.399 [2024-12-14T21:47:50.283Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:29.399 22:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 575915 00:38:29.399 22:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:29.659 22:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:29.917 22:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec405035-5e73-4742-8d7e-b357428e7302 00:38:29.918 22:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:30.177 22:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:30.177 22:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:30.177 22:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:30.177 [2024-12-14 22:47:51.019579] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:30.177 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec405035-5e73-4742-8d7e-b357428e7302 00:38:30.177 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:38:30.177 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec405035-5e73-4742-8d7e-b357428e7302 00:38:30.177 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:30.177 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:30.177 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:30.177 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:30.177 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:30.177 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:30.177 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:30.177 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:30.177 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec405035-5e73-4742-8d7e-b357428e7302 00:38:30.435 request: 00:38:30.435 { 00:38:30.435 "uuid": "ec405035-5e73-4742-8d7e-b357428e7302", 00:38:30.435 "method": 
"bdev_lvol_get_lvstores", 00:38:30.435 "req_id": 1 00:38:30.435 } 00:38:30.435 Got JSON-RPC error response 00:38:30.435 response: 00:38:30.435 { 00:38:30.435 "code": -19, 00:38:30.435 "message": "No such device" 00:38:30.435 } 00:38:30.435 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:38:30.435 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:30.435 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:30.435 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:30.435 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:30.693 aio_bdev 00:38:30.693 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e928c557-0f3c-4b3f-8a0c-c5ec0f699d64 00:38:30.693 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e928c557-0f3c-4b3f-8a0c-c5ec0f699d64 00:38:30.693 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:30.693 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:38:30.693 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:30.693 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:30.693 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:30.951 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e928c557-0f3c-4b3f-8a0c-c5ec0f699d64 -t 2000 00:38:30.951 [ 00:38:30.951 { 00:38:30.951 "name": "e928c557-0f3c-4b3f-8a0c-c5ec0f699d64", 00:38:30.951 "aliases": [ 00:38:30.951 "lvs/lvol" 00:38:30.951 ], 00:38:30.951 "product_name": "Logical Volume", 00:38:30.951 "block_size": 4096, 00:38:30.951 "num_blocks": 38912, 00:38:30.951 "uuid": "e928c557-0f3c-4b3f-8a0c-c5ec0f699d64", 00:38:30.951 "assigned_rate_limits": { 00:38:30.951 "rw_ios_per_sec": 0, 00:38:30.951 "rw_mbytes_per_sec": 0, 00:38:30.951 "r_mbytes_per_sec": 0, 00:38:30.951 "w_mbytes_per_sec": 0 00:38:30.951 }, 00:38:30.951 "claimed": false, 00:38:30.951 "zoned": false, 00:38:30.951 "supported_io_types": { 00:38:30.951 "read": true, 00:38:30.951 "write": true, 00:38:30.951 "unmap": true, 00:38:30.951 "flush": false, 00:38:30.951 "reset": true, 00:38:30.951 "nvme_admin": false, 00:38:30.951 "nvme_io": false, 00:38:30.951 "nvme_io_md": false, 00:38:30.951 "write_zeroes": true, 00:38:30.951 "zcopy": false, 00:38:30.951 "get_zone_info": false, 00:38:30.951 "zone_management": false, 00:38:30.951 "zone_append": false, 00:38:30.951 "compare": false, 00:38:30.951 "compare_and_write": false, 00:38:30.951 "abort": false, 00:38:30.951 "seek_hole": true, 00:38:30.951 "seek_data": true, 00:38:30.951 "copy": false, 00:38:30.951 "nvme_iov_md": false 00:38:30.951 }, 00:38:30.951 "driver_specific": { 00:38:30.951 "lvol": { 00:38:30.952 "lvol_store_uuid": "ec405035-5e73-4742-8d7e-b357428e7302", 00:38:30.952 "base_bdev": "aio_bdev", 00:38:30.952 
"thin_provision": false, 00:38:30.952 "num_allocated_clusters": 38, 00:38:30.952 "snapshot": false, 00:38:30.952 "clone": false, 00:38:30.952 "esnap_clone": false 00:38:30.952 } 00:38:30.952 } 00:38:30.952 } 00:38:30.952 ] 00:38:31.210 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:38:31.210 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec405035-5e73-4742-8d7e-b357428e7302 00:38:31.210 22:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:31.210 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:31.210 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec405035-5e73-4742-8d7e-b357428e7302 00:38:31.210 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:31.469 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:31.469 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e928c557-0f3c-4b3f-8a0c-c5ec0f699d64 00:38:31.728 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ec405035-5e73-4742-8d7e-b357428e7302 
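The cluster and block counts in the run above are consistent with the AIO file sizes the test uses: a 200 MiB backing file with `--cluster-sz 4194304` holds 50 clusters, of which 49 are reported as data clusters (the one-cluster difference is inferred from the 49-of-50 and 99-of-100 values in this log, not from lvstore internals), and the 150 MiB lvol rounds up to 38 whole clusters, i.e. the `"num_blocks": 38912` and `"num_allocated_clusters": 38` shown by `bdev_get_bdevs`. A sketch of that arithmetic:

```python
import math

MIB = 1024 * 1024
CLUSTER_SZ = 4 * MIB   # --cluster-sz 4194304
BLOCK_SZ = 4096        # block size passed to bdev_aio_create

def data_clusters(aio_bytes, reserved=1):
    # reserved=1 is inferred from the 49/99 values in the log output.
    return aio_bytes // CLUSTER_SZ - reserved

def lvol_blocks(size_mib):
    # lvol sizes round up to whole clusters: 150 MiB -> 38 clusters.
    clusters = math.ceil(size_mib * MIB / CLUSTER_SZ)
    return clusters * CLUSTER_SZ // BLOCK_SZ

print(data_clusters(200 * MIB))  # 49, before the grow
print(data_clusters(400 * MIB))  # 99, after truncate -s 400M + grow_lvstore
print(lvol_blocks(150))          # 38912, the num_blocks of Nvme0n1
```

The same sizes also explain the `bdev_aio_rescan` notice: 200 MiB / 4096 B = 51200 blocks before the truncate, 102400 after.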
00:38:31.987 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:31.987 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:31.987 00:38:31.987 real 0m15.663s 00:38:31.987 user 0m15.202s 00:38:31.987 sys 0m1.477s 00:38:31.987 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:31.987 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:31.987 ************************************ 00:38:31.987 END TEST lvs_grow_clean 00:38:31.987 ************************************ 00:38:31.987 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:31.987 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:31.987 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:31.987 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:32.246 ************************************ 00:38:32.246 START TEST lvs_grow_dirty 00:38:32.246 ************************************ 00:38:32.246 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:38:32.246 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:32.246 22:47:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:32.246 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:32.246 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:32.246 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:32.246 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:32.246 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:32.246 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:32.246 22:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:32.505 22:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:32.505 22:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:32.505 22:47:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f4b6690d-365a-4bef-b970-be2a6acf27cd 00:38:32.505 22:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b6690d-365a-4bef-b970-be2a6acf27cd 00:38:32.505 22:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:32.764 22:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:32.764 22:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:32.764 22:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f4b6690d-365a-4bef-b970-be2a6acf27cd lvol 150 00:38:33.022 22:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=73daa15e-0286-4489-a740-4236f3171336 00:38:33.022 22:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:33.022 22:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:33.022 [2024-12-14 22:47:53.895549] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:33.022 [2024-12-14 
22:47:53.895684] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:33.022 true 00:38:33.281 22:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b6690d-365a-4bef-b970-be2a6acf27cd 00:38:33.281 22:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:33.281 22:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:33.281 22:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:33.540 22:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 73daa15e-0286-4489-a740-4236f3171336 00:38:33.799 22:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:33.799 [2024-12-14 22:47:54.663961] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:34.058 22:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:34.058 22:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=578425 00:38:34.058 22:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:34.058 22:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:34.058 22:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 578425 /var/tmp/bdevperf.sock 00:38:34.058 22:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 578425 ']' 00:38:34.058 22:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:34.058 22:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:34.058 22:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:34.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:34.058 22:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:34.058 22:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:34.058 [2024-12-14 22:47:54.925889] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:38:34.058 [2024-12-14 22:47:54.925945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid578425 ] 00:38:34.316 [2024-12-14 22:47:54.996257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.316 [2024-12-14 22:47:55.018582] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:34.316 22:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:34.316 22:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:34.316 22:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:34.903 Nvme0n1 00:38:34.903 22:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:34.903 [ 00:38:34.903 { 00:38:34.904 "name": "Nvme0n1", 00:38:34.904 "aliases": [ 00:38:34.904 "73daa15e-0286-4489-a740-4236f3171336" 00:38:34.904 ], 00:38:34.904 "product_name": "NVMe disk", 00:38:34.904 "block_size": 4096, 00:38:34.904 "num_blocks": 38912, 00:38:34.904 "uuid": "73daa15e-0286-4489-a740-4236f3171336", 00:38:34.904 "numa_id": 1, 00:38:34.904 "assigned_rate_limits": { 00:38:34.904 "rw_ios_per_sec": 0, 00:38:34.904 "rw_mbytes_per_sec": 0, 00:38:34.904 "r_mbytes_per_sec": 0, 00:38:34.904 "w_mbytes_per_sec": 0 00:38:34.904 }, 00:38:34.904 "claimed": false, 00:38:34.904 "zoned": false, 
00:38:34.904 "supported_io_types": { 00:38:34.904 "read": true, 00:38:34.904 "write": true, 00:38:34.904 "unmap": true, 00:38:34.904 "flush": true, 00:38:34.904 "reset": true, 00:38:34.904 "nvme_admin": true, 00:38:34.904 "nvme_io": true, 00:38:34.904 "nvme_io_md": false, 00:38:34.904 "write_zeroes": true, 00:38:34.904 "zcopy": false, 00:38:34.904 "get_zone_info": false, 00:38:34.904 "zone_management": false, 00:38:34.904 "zone_append": false, 00:38:34.904 "compare": true, 00:38:34.904 "compare_and_write": true, 00:38:34.904 "abort": true, 00:38:34.904 "seek_hole": false, 00:38:34.904 "seek_data": false, 00:38:34.904 "copy": true, 00:38:34.904 "nvme_iov_md": false 00:38:34.904 }, 00:38:34.904 "memory_domains": [ 00:38:34.904 { 00:38:34.904 "dma_device_id": "system", 00:38:34.904 "dma_device_type": 1 00:38:34.904 } 00:38:34.904 ], 00:38:34.904 "driver_specific": { 00:38:34.904 "nvme": [ 00:38:34.904 { 00:38:34.904 "trid": { 00:38:34.904 "trtype": "TCP", 00:38:34.904 "adrfam": "IPv4", 00:38:34.904 "traddr": "10.0.0.2", 00:38:34.904 "trsvcid": "4420", 00:38:34.904 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:34.904 }, 00:38:34.904 "ctrlr_data": { 00:38:34.904 "cntlid": 1, 00:38:34.904 "vendor_id": "0x8086", 00:38:34.904 "model_number": "SPDK bdev Controller", 00:38:34.904 "serial_number": "SPDK0", 00:38:34.904 "firmware_revision": "25.01", 00:38:34.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:34.904 "oacs": { 00:38:34.904 "security": 0, 00:38:34.904 "format": 0, 00:38:34.904 "firmware": 0, 00:38:34.904 "ns_manage": 0 00:38:34.904 }, 00:38:34.904 "multi_ctrlr": true, 00:38:34.904 "ana_reporting": false 00:38:34.904 }, 00:38:34.904 "vs": { 00:38:34.904 "nvme_version": "1.3" 00:38:34.904 }, 00:38:34.904 "ns_data": { 00:38:34.904 "id": 1, 00:38:34.904 "can_share": true 00:38:34.904 } 00:38:34.904 } 00:38:34.904 ], 00:38:34.904 "mp_policy": "active_passive" 00:38:34.904 } 00:38:34.904 } 00:38:34.904 ] 00:38:34.904 22:47:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=578515 00:38:34.904 22:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:34.904 22:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:35.219 Running I/O for 10 seconds... 00:38:36.240 Latency(us) 00:38:36.240 [2024-12-14T21:47:57.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:36.240 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:38:36.240 [2024-12-14T21:47:57.124Z] =================================================================================================================== 00:38:36.240 [2024-12-14T21:47:57.124Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:38:36.240 00:38:36.826 22:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f4b6690d-365a-4bef-b970-be2a6acf27cd 00:38:37.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:37.084 Nvme0n1 : 2.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:38:37.084 [2024-12-14T21:47:57.968Z] =================================================================================================================== 00:38:37.084 [2024-12-14T21:47:57.968Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:38:37.084 00:38:37.084 true 00:38:37.084 22:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u f4b6690d-365a-4bef-b970-be2a6acf27cd 00:38:37.084 22:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:37.343 22:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:37.343 22:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:37.343 22:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 578515 00:38:37.910 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:37.910 Nvme0n1 : 3.00 23283.33 90.95 0.00 0.00 0.00 0.00 0.00 00:38:37.910 [2024-12-14T21:47:58.794Z] =================================================================================================================== 00:38:37.910 [2024-12-14T21:47:58.794Z] Total : 23283.33 90.95 0.00 0.00 0.00 0.00 0.00 00:38:37.910 00:38:39.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:39.287 Nvme0n1 : 4.00 23431.50 91.53 0.00 0.00 0.00 0.00 0.00 00:38:39.287 [2024-12-14T21:48:00.171Z] =================================================================================================================== 00:38:39.287 [2024-12-14T21:48:00.171Z] Total : 23431.50 91.53 0.00 0.00 0.00 0.00 0.00 00:38:39.287 00:38:40.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:40.228 Nvme0n1 : 5.00 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:38:40.228 [2024-12-14T21:48:01.112Z] =================================================================================================================== 00:38:40.228 [2024-12-14T21:48:01.112Z] Total : 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:38:40.228 00:38:41.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:38:41.163 Nvme0n1 : 6.00 23431.50 91.53 0.00 0.00 0.00 0.00 0.00 00:38:41.163 [2024-12-14T21:48:02.047Z] =================================================================================================================== 00:38:41.163 [2024-12-14T21:48:02.047Z] Total : 23431.50 91.53 0.00 0.00 0.00 0.00 0.00 00:38:41.163 00:38:42.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:42.101 Nvme0n1 : 7.00 23495.00 91.78 0.00 0.00 0.00 0.00 0.00 00:38:42.101 [2024-12-14T21:48:02.985Z] =================================================================================================================== 00:38:42.101 [2024-12-14T21:48:02.985Z] Total : 23495.00 91.78 0.00 0.00 0.00 0.00 0.00 00:38:42.101 00:38:43.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:43.037 Nvme0n1 : 8.00 23526.75 91.90 0.00 0.00 0.00 0.00 0.00 00:38:43.037 [2024-12-14T21:48:03.921Z] =================================================================================================================== 00:38:43.037 [2024-12-14T21:48:03.921Z] Total : 23526.75 91.90 0.00 0.00 0.00 0.00 0.00 00:38:43.037 00:38:43.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:43.975 Nvme0n1 : 9.00 23565.56 92.05 0.00 0.00 0.00 0.00 0.00 00:38:43.975 [2024-12-14T21:48:04.859Z] =================================================================================================================== 00:38:43.975 [2024-12-14T21:48:04.859Z] Total : 23565.56 92.05 0.00 0.00 0.00 0.00 0.00 00:38:43.975 00:38:45.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:45.352 Nvme0n1 : 10.00 23596.60 92.17 0.00 0.00 0.00 0.00 0.00 00:38:45.352 [2024-12-14T21:48:06.236Z] =================================================================================================================== 00:38:45.352 [2024-12-14T21:48:06.236Z] Total : 23596.60 92.17 0.00 0.00 0.00 0.00 0.00 00:38:45.352 00:38:45.352 
00:38:45.352 Latency(us) 00:38:45.352 [2024-12-14T21:48:06.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:45.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:45.352 Nvme0n1 : 10.00 23597.99 92.18 0.00 0.00 5421.28 4681.14 26588.89 00:38:45.352 [2024-12-14T21:48:06.236Z] =================================================================================================================== 00:38:45.352 [2024-12-14T21:48:06.236Z] Total : 23597.99 92.18 0.00 0.00 5421.28 4681.14 26588.89 00:38:45.352 { 00:38:45.352 "results": [ 00:38:45.352 { 00:38:45.352 "job": "Nvme0n1", 00:38:45.352 "core_mask": "0x2", 00:38:45.352 "workload": "randwrite", 00:38:45.352 "status": "finished", 00:38:45.352 "queue_depth": 128, 00:38:45.352 "io_size": 4096, 00:38:45.352 "runtime": 10.004834, 00:38:45.352 "iops": 23597.992730314167, 00:38:45.352 "mibps": 92.17965910278971, 00:38:45.352 "io_failed": 0, 00:38:45.352 "io_timeout": 0, 00:38:45.352 "avg_latency_us": 5421.276391235614, 00:38:45.352 "min_latency_us": 4681.142857142857, 00:38:45.352 "max_latency_us": 26588.891428571427 00:38:45.352 } 00:38:45.352 ], 00:38:45.352 "core_count": 1 00:38:45.352 } 00:38:45.352 22:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 578425 00:38:45.352 22:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 578425 ']' 00:38:45.352 22:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 578425 00:38:45.352 22:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:38:45.352 22:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:45.352 22:48:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 578425 00:38:45.352 22:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:45.352 22:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:45.352 22:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 578425' 00:38:45.352 killing process with pid 578425 00:38:45.352 22:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 578425 00:38:45.352 Received shutdown signal, test time was about 10.000000 seconds 00:38:45.352 00:38:45.352 Latency(us) 00:38:45.352 [2024-12-14T21:48:06.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:45.352 [2024-12-14T21:48:06.236Z] =================================================================================================================== 00:38:45.352 [2024-12-14T21:48:06.236Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:45.352 22:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 578425 00:38:45.352 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:45.352 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:45.611 22:48:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b6690d-365a-4bef-b970-be2a6acf27cd 00:38:45.611 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:45.870 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:45.870 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:45.870 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 575438 00:38:45.870 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 575438 00:38:45.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 575438 Killed "${NVMF_APP[@]}" "$@" 00:38:45.870 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:45.870 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:45.870 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:45.870 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:45.870 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:45.870 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=580537 00:38:45.870 22:48:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 580537 00:38:45.870 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:45.870 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 580537 ']' 00:38:45.870 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:45.870 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:45.871 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:45.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:45.871 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:45.871 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:45.871 [2024-12-14 22:48:06.714606] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:45.871 [2024-12-14 22:48:06.715555] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:38:45.871 [2024-12-14 22:48:06.715593] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:46.130 [2024-12-14 22:48:06.796121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.130 [2024-12-14 22:48:06.816768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:46.130 [2024-12-14 22:48:06.816804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:46.130 [2024-12-14 22:48:06.816811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:46.130 [2024-12-14 22:48:06.816819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:46.130 [2024-12-14 22:48:06.816824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:46.130 [2024-12-14 22:48:06.817345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:46.130 [2024-12-14 22:48:06.879082] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:46.130 [2024-12-14 22:48:06.879280] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:46.130 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:46.130 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:46.130 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:46.130 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:46.130 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:46.130 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:46.130 22:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:46.389 [2024-12-14 22:48:07.134765] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:46.389 [2024-12-14 22:48:07.134984] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:46.389 [2024-12-14 22:48:07.135073] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:46.389 22:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:46.389 22:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 73daa15e-0286-4489-a740-4236f3171336 00:38:46.389 22:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=73daa15e-0286-4489-a740-4236f3171336 00:38:46.389 22:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:46.389 22:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:46.389 22:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:46.389 22:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:46.389 22:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:46.648 22:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 73daa15e-0286-4489-a740-4236f3171336 -t 2000 00:38:46.907 [ 00:38:46.907 { 00:38:46.907 "name": "73daa15e-0286-4489-a740-4236f3171336", 00:38:46.907 "aliases": [ 00:38:46.907 "lvs/lvol" 00:38:46.907 ], 00:38:46.907 "product_name": "Logical Volume", 00:38:46.907 "block_size": 4096, 00:38:46.907 "num_blocks": 38912, 00:38:46.907 "uuid": "73daa15e-0286-4489-a740-4236f3171336", 00:38:46.907 "assigned_rate_limits": { 00:38:46.907 "rw_ios_per_sec": 0, 00:38:46.907 "rw_mbytes_per_sec": 0, 00:38:46.907 "r_mbytes_per_sec": 0, 00:38:46.907 "w_mbytes_per_sec": 0 00:38:46.907 }, 00:38:46.907 "claimed": false, 00:38:46.907 "zoned": false, 00:38:46.907 "supported_io_types": { 00:38:46.907 "read": true, 00:38:46.907 "write": true, 00:38:46.907 "unmap": true, 00:38:46.907 "flush": false, 00:38:46.907 "reset": true, 00:38:46.907 "nvme_admin": false, 00:38:46.907 "nvme_io": false, 00:38:46.907 "nvme_io_md": false, 00:38:46.907 "write_zeroes": true, 
00:38:46.907 "zcopy": false, 00:38:46.907 "get_zone_info": false, 00:38:46.907 "zone_management": false, 00:38:46.907 "zone_append": false, 00:38:46.907 "compare": false, 00:38:46.907 "compare_and_write": false, 00:38:46.907 "abort": false, 00:38:46.907 "seek_hole": true, 00:38:46.907 "seek_data": true, 00:38:46.907 "copy": false, 00:38:46.907 "nvme_iov_md": false 00:38:46.907 }, 00:38:46.907 "driver_specific": { 00:38:46.907 "lvol": { 00:38:46.907 "lvol_store_uuid": "f4b6690d-365a-4bef-b970-be2a6acf27cd", 00:38:46.907 "base_bdev": "aio_bdev", 00:38:46.907 "thin_provision": false, 00:38:46.907 "num_allocated_clusters": 38, 00:38:46.907 "snapshot": false, 00:38:46.907 "clone": false, 00:38:46.907 "esnap_clone": false 00:38:46.907 } 00:38:46.907 } 00:38:46.907 } 00:38:46.907 ] 00:38:46.907 22:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:46.907 22:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b6690d-365a-4bef-b970-be2a6acf27cd 00:38:46.907 22:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:46.907 22:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:46.907 22:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b6690d-365a-4bef-b970-be2a6acf27cd 00:38:46.907 22:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:47.166 22:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:47.166 22:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:47.425 [2024-12-14 22:48:08.097780] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:47.425 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b6690d-365a-4bef-b970-be2a6acf27cd 00:38:47.425 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:38:47.425 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b6690d-365a-4bef-b970-be2a6acf27cd 00:38:47.425 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:47.425 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:47.425 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:47.425 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:47.425 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:47.425 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:47.425 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:47.425 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:47.425 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b6690d-365a-4bef-b970-be2a6acf27cd 00:38:47.425 request: 00:38:47.425 { 00:38:47.425 "uuid": "f4b6690d-365a-4bef-b970-be2a6acf27cd", 00:38:47.425 "method": "bdev_lvol_get_lvstores", 00:38:47.425 "req_id": 1 00:38:47.425 } 00:38:47.425 Got JSON-RPC error response 00:38:47.425 response: 00:38:47.425 { 00:38:47.425 "code": -19, 00:38:47.425 "message": "No such device" 00:38:47.425 } 00:38:47.683 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:38:47.683 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:47.683 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:47.683 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:47.683 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:47.683 aio_bdev 00:38:47.683 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 73daa15e-0286-4489-a740-4236f3171336 00:38:47.683 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=73daa15e-0286-4489-a740-4236f3171336 00:38:47.683 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:47.683 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:47.683 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:47.683 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:47.683 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:47.942 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 73daa15e-0286-4489-a740-4236f3171336 -t 2000 00:38:48.201 [ 00:38:48.201 { 00:38:48.201 "name": "73daa15e-0286-4489-a740-4236f3171336", 00:38:48.201 "aliases": [ 00:38:48.201 "lvs/lvol" 00:38:48.201 ], 00:38:48.201 "product_name": "Logical Volume", 00:38:48.201 "block_size": 4096, 00:38:48.201 "num_blocks": 38912, 00:38:48.201 "uuid": "73daa15e-0286-4489-a740-4236f3171336", 00:38:48.201 "assigned_rate_limits": { 00:38:48.201 "rw_ios_per_sec": 0, 00:38:48.201 "rw_mbytes_per_sec": 0, 00:38:48.201 
"r_mbytes_per_sec": 0, 00:38:48.201 "w_mbytes_per_sec": 0 00:38:48.201 }, 00:38:48.201 "claimed": false, 00:38:48.201 "zoned": false, 00:38:48.201 "supported_io_types": { 00:38:48.201 "read": true, 00:38:48.201 "write": true, 00:38:48.201 "unmap": true, 00:38:48.201 "flush": false, 00:38:48.201 "reset": true, 00:38:48.201 "nvme_admin": false, 00:38:48.201 "nvme_io": false, 00:38:48.201 "nvme_io_md": false, 00:38:48.201 "write_zeroes": true, 00:38:48.201 "zcopy": false, 00:38:48.201 "get_zone_info": false, 00:38:48.201 "zone_management": false, 00:38:48.201 "zone_append": false, 00:38:48.201 "compare": false, 00:38:48.201 "compare_and_write": false, 00:38:48.201 "abort": false, 00:38:48.201 "seek_hole": true, 00:38:48.201 "seek_data": true, 00:38:48.201 "copy": false, 00:38:48.201 "nvme_iov_md": false 00:38:48.201 }, 00:38:48.201 "driver_specific": { 00:38:48.201 "lvol": { 00:38:48.201 "lvol_store_uuid": "f4b6690d-365a-4bef-b970-be2a6acf27cd", 00:38:48.201 "base_bdev": "aio_bdev", 00:38:48.201 "thin_provision": false, 00:38:48.201 "num_allocated_clusters": 38, 00:38:48.201 "snapshot": false, 00:38:48.201 "clone": false, 00:38:48.201 "esnap_clone": false 00:38:48.201 } 00:38:48.201 } 00:38:48.201 } 00:38:48.201 ] 00:38:48.201 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:48.201 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b6690d-365a-4bef-b970-be2a6acf27cd 00:38:48.201 22:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:48.460 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:48.460 22:48:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b6690d-365a-4bef-b970-be2a6acf27cd 00:38:48.460 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:48.460 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:48.460 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 73daa15e-0286-4489-a740-4236f3171336 00:38:48.719 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f4b6690d-365a-4bef-b970-be2a6acf27cd 00:38:48.977 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:48.978 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:48.978 00:38:48.978 real 0m16.958s 00:38:48.978 user 0m34.267s 00:38:48.978 sys 0m3.913s 00:38:48.978 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:48.978 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:48.978 ************************************ 00:38:48.978 END TEST lvs_grow_dirty 00:38:49.236 ************************************ 
00:38:49.236 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:49.236 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:38:49.236 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:38:49.236 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:38:49.236 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:49.236 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:38:49.236 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:38:49.236 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:38:49.236 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:49.236 nvmf_trace.0 00:38:49.236 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:38:49.236 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:49.237 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:49.237 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:38:49.237 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:49.237 22:48:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:49.237 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:49.237 22:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:49.237 rmmod nvme_tcp 00:38:49.237 rmmod nvme_fabrics 00:38:49.237 rmmod nvme_keyring 00:38:49.237 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:49.237 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:49.237 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:49.237 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 580537 ']' 00:38:49.237 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 580537 00:38:49.237 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 580537 ']' 00:38:49.237 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 580537 00:38:49.237 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:38:49.237 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:49.237 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 580537 00:38:49.237 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:49.237 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:49.237 22:48:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 580537' 00:38:49.237 killing process with pid 580537 00:38:49.237 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 580537 00:38:49.237 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 580537 00:38:49.496 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:49.496 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:49.496 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:49.496 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:49.496 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:38:49.496 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:49.496 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:38:49.496 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:49.496 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:49.496 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:49.496 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:49.496 22:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:52.032 22:48:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:52.032 00:38:52.032 real 0m41.748s 00:38:52.032 user 0m51.944s 00:38:52.032 sys 0m10.243s 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:52.032 ************************************ 00:38:52.032 END TEST nvmf_lvs_grow 00:38:52.032 ************************************ 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:52.032 ************************************ 00:38:52.032 START TEST nvmf_bdev_io_wait 00:38:52.032 ************************************ 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:52.032 * Looking for test storage... 
00:38:52.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:52.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.032 --rc genhtml_branch_coverage=1 00:38:52.032 --rc genhtml_function_coverage=1 00:38:52.032 --rc genhtml_legend=1 00:38:52.032 --rc geninfo_all_blocks=1 00:38:52.032 --rc geninfo_unexecuted_blocks=1 00:38:52.032 00:38:52.032 ' 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:52.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.032 --rc genhtml_branch_coverage=1 00:38:52.032 --rc genhtml_function_coverage=1 00:38:52.032 --rc genhtml_legend=1 00:38:52.032 --rc geninfo_all_blocks=1 00:38:52.032 --rc geninfo_unexecuted_blocks=1 00:38:52.032 00:38:52.032 ' 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:52.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.032 --rc genhtml_branch_coverage=1 00:38:52.032 --rc genhtml_function_coverage=1 00:38:52.032 --rc genhtml_legend=1 00:38:52.032 --rc geninfo_all_blocks=1 00:38:52.032 --rc geninfo_unexecuted_blocks=1 00:38:52.032 00:38:52.032 ' 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:52.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.032 --rc genhtml_branch_coverage=1 00:38:52.032 --rc genhtml_function_coverage=1 
00:38:52.032 --rc genhtml_legend=1 00:38:52.032 --rc geninfo_all_blocks=1 00:38:52.032 --rc geninfo_unexecuted_blocks=1 00:38:52.032 00:38:52.032 ' 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:52.032 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:52.033 22:48:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.033 22:48:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:52.033 22:48:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:52.033 22:48:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:52.033 22:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:57.307 22:48:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:57.307 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:57.307 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:57.308 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:57.308 Found net devices under 0000:af:00.0: cvl_0_0 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:57.308 Found net devices under 0000:af:00.1: cvl_0_1 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:38:57.308 22:48:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:57.308 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:57.567 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:57.567 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:57.567 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:57.567 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:57.567 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:57.567 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:57.567 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:57.567 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:57.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:57.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:38:57.567 00:38:57.567 --- 10.0.0.2 ping statistics --- 00:38:57.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:57.567 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:38:57.567 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:57.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:57.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:38:57.567 00:38:57.567 --- 10.0.0.1 ping statistics --- 00:38:57.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:57.567 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:38:57.567 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:57.567 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:38:57.567 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:57.568 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:57.568 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:57.568 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:57.568 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:57.568 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:57.568 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:57.568 22:48:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:57.568 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:57.568 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:57.568 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:57.568 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=584720 00:38:57.568 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 584720 00:38:57.568 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:57.568 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 584720 ']' 00:38:57.568 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:57.568 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:57.568 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:57.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:57.568 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:57.568 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:57.568 [2024-12-14 22:48:18.431861] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:57.568 [2024-12-14 22:48:18.432747] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:57.568 [2024-12-14 22:48:18.432779] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:57.827 [2024-12-14 22:48:18.511545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:57.827 [2024-12-14 22:48:18.535894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:57.827 [2024-12-14 22:48:18.535931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:57.827 [2024-12-14 22:48:18.535941] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:57.827 [2024-12-14 22:48:18.535949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:57.827 [2024-12-14 22:48:18.535956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:57.827 [2024-12-14 22:48:18.537201] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:57.827 [2024-12-14 22:48:18.537231] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:57.827 [2024-12-14 22:48:18.537336] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:57.827 [2024-12-14 22:48:18.537337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:38:57.827 [2024-12-14 22:48:18.537737] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:57.827 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:57.827 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:38:57.827 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:57.827 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:57.827 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:57.827 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:57.827 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:57.827 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.827 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:57.827 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.827 22:48:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:57.827 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.827 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:57.827 [2024-12-14 22:48:18.690309] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:57.827 [2024-12-14 22:48:18.690992] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:57.827 [2024-12-14 22:48:18.691246] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:57.827 [2024-12-14 22:48:18.691368] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:38:57.827 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.827 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:57.827 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.827 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:57.827 [2024-12-14 22:48:18.702224] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:58.087 Malloc0 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.087 22:48:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:58.087 [2024-12-14 22:48:18.778460] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=584839 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=584842 00:38:58.087 22:48:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:58.087 { 00:38:58.087 "params": { 00:38:58.087 "name": "Nvme$subsystem", 00:38:58.087 "trtype": "$TEST_TRANSPORT", 00:38:58.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:58.087 "adrfam": "ipv4", 00:38:58.087 "trsvcid": "$NVMF_PORT", 00:38:58.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:58.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:58.087 "hdgst": ${hdgst:-false}, 00:38:58.087 "ddgst": ${ddgst:-false} 00:38:58.087 }, 00:38:58.087 "method": "bdev_nvme_attach_controller" 00:38:58.087 } 00:38:58.087 EOF 00:38:58.087 )") 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=584845 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:58.087 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:58.087 22:48:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:58.087 { 00:38:58.087 "params": { 00:38:58.087 "name": "Nvme$subsystem", 00:38:58.087 "trtype": "$TEST_TRANSPORT", 00:38:58.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:58.087 "adrfam": "ipv4", 00:38:58.087 "trsvcid": "$NVMF_PORT", 00:38:58.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:58.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:58.087 "hdgst": ${hdgst:-false}, 00:38:58.087 "ddgst": ${ddgst:-false} 00:38:58.087 }, 00:38:58.087 "method": "bdev_nvme_attach_controller" 00:38:58.087 } 00:38:58.087 EOF 00:38:58.088 )") 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=584849 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:58.088 { 00:38:58.088 "params": { 00:38:58.088 "name": 
"Nvme$subsystem", 00:38:58.088 "trtype": "$TEST_TRANSPORT", 00:38:58.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:58.088 "adrfam": "ipv4", 00:38:58.088 "trsvcid": "$NVMF_PORT", 00:38:58.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:58.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:58.088 "hdgst": ${hdgst:-false}, 00:38:58.088 "ddgst": ${ddgst:-false} 00:38:58.088 }, 00:38:58.088 "method": "bdev_nvme_attach_controller" 00:38:58.088 } 00:38:58.088 EOF 00:38:58.088 )") 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:58.088 { 00:38:58.088 "params": { 00:38:58.088 "name": "Nvme$subsystem", 00:38:58.088 "trtype": "$TEST_TRANSPORT", 00:38:58.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:58.088 "adrfam": "ipv4", 00:38:58.088 "trsvcid": "$NVMF_PORT", 00:38:58.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:58.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:58.088 "hdgst": ${hdgst:-false}, 00:38:58.088 "ddgst": ${ddgst:-false} 00:38:58.088 }, 00:38:58.088 "method": 
"bdev_nvme_attach_controller" 00:38:58.088 } 00:38:58.088 EOF 00:38:58.088 )") 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 584839 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:58.088 "params": { 00:38:58.088 "name": "Nvme1", 00:38:58.088 "trtype": "tcp", 00:38:58.088 "traddr": "10.0.0.2", 00:38:58.088 "adrfam": "ipv4", 00:38:58.088 "trsvcid": "4420", 00:38:58.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:58.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:58.088 "hdgst": false, 00:38:58.088 "ddgst": false 00:38:58.088 }, 00:38:58.088 "method": "bdev_nvme_attach_controller" 00:38:58.088 }' 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:58.088 "params": { 00:38:58.088 "name": "Nvme1", 00:38:58.088 "trtype": "tcp", 00:38:58.088 "traddr": "10.0.0.2", 00:38:58.088 "adrfam": "ipv4", 00:38:58.088 "trsvcid": "4420", 00:38:58.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:58.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:58.088 "hdgst": false, 00:38:58.088 "ddgst": false 00:38:58.088 }, 00:38:58.088 "method": "bdev_nvme_attach_controller" 00:38:58.088 }' 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:58.088 "params": { 00:38:58.088 "name": "Nvme1", 00:38:58.088 "trtype": "tcp", 00:38:58.088 "traddr": "10.0.0.2", 00:38:58.088 "adrfam": "ipv4", 00:38:58.088 "trsvcid": "4420", 00:38:58.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:58.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:58.088 "hdgst": false, 00:38:58.088 "ddgst": false 00:38:58.088 }, 00:38:58.088 "method": "bdev_nvme_attach_controller" 00:38:58.088 }' 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:58.088 22:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:58.088 "params": { 00:38:58.088 "name": "Nvme1", 00:38:58.088 "trtype": "tcp", 00:38:58.088 "traddr": "10.0.0.2", 00:38:58.088 "adrfam": "ipv4", 00:38:58.088 "trsvcid": "4420", 00:38:58.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:58.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:58.088 "hdgst": false, 00:38:58.088 "ddgst": false 00:38:58.088 }, 00:38:58.088 "method": "bdev_nvme_attach_controller" 
00:38:58.088 }' 00:38:58.088 [2024-12-14 22:48:18.830020] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:58.088 [2024-12-14 22:48:18.830079] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:58.088 [2024-12-14 22:48:18.833178] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:58.088 [2024-12-14 22:48:18.833225] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:58.088 [2024-12-14 22:48:18.833425] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:58.088 [2024-12-14 22:48:18.833466] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:58.088 [2024-12-14 22:48:18.835359] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:38:58.088 [2024-12-14 22:48:18.835409] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:58.347 [2024-12-14 22:48:19.018751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.347 [2024-12-14 22:48:19.036255] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:38:58.347 [2024-12-14 22:48:19.113502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.347 [2024-12-14 22:48:19.130728] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:38:58.347 [2024-12-14 22:48:19.211055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.606 [2024-12-14 22:48:19.234617] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:38:58.606 [2024-12-14 22:48:19.270311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.606 [2024-12-14 22:48:19.286133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:38:58.606 Running I/O for 1 seconds... 00:38:58.606 Running I/O for 1 seconds... 00:38:58.606 Running I/O for 1 seconds... 00:38:58.864 Running I/O for 1 seconds... 
00:38:59.799 11753.00 IOPS, 45.91 MiB/s 00:38:59.799 Latency(us) 00:38:59.799 [2024-12-14T21:48:20.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:59.799 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:59.799 Nvme1n1 : 1.01 11814.75 46.15 0.00 0.00 10798.42 3620.08 12420.63 00:38:59.799 [2024-12-14T21:48:20.683Z] =================================================================================================================== 00:38:59.799 [2024-12-14T21:48:20.683Z] Total : 11814.75 46.15 0.00 0.00 10798.42 3620.08 12420.63 00:38:59.799 11265.00 IOPS, 44.00 MiB/s 00:38:59.799 Latency(us) 00:38:59.799 [2024-12-14T21:48:20.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:59.799 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:59.799 Nvme1n1 : 1.01 11342.07 44.30 0.00 0.00 11253.77 1685.21 13981.01 00:38:59.799 [2024-12-14T21:48:20.683Z] =================================================================================================================== 00:38:59.799 [2024-12-14T21:48:20.683Z] Total : 11342.07 44.30 0.00 0.00 11253.77 1685.21 13981.01 00:38:59.799 241224.00 IOPS, 942.28 MiB/s 00:38:59.799 Latency(us) 00:38:59.799 [2024-12-14T21:48:20.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:59.799 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:59.799 Nvme1n1 : 1.00 240851.45 940.83 0.00 0.00 528.55 224.30 1529.17 00:38:59.799 [2024-12-14T21:48:20.683Z] =================================================================================================================== 00:38:59.799 [2024-12-14T21:48:20.683Z] Total : 240851.45 940.83 0.00 0.00 528.55 224.30 1529.17 00:38:59.799 10469.00 IOPS, 40.89 MiB/s 00:38:59.799 Latency(us) 00:38:59.799 [2024-12-14T21:48:20.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:59.799 Job: Nvme1n1 (Core 
Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:59.799 Nvme1n1 : 1.01 10540.35 41.17 0.00 0.00 12109.06 4275.44 17601.10 00:38:59.799 [2024-12-14T21:48:20.683Z] =================================================================================================================== 00:38:59.799 [2024-12-14T21:48:20.683Z] Total : 10540.35 41.17 0.00 0.00 12109.06 4275.44 17601.10 00:38:59.799 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 584842 00:38:59.799 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 584845 00:38:59.799 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 584849 00:38:59.799 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:59.799 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.799 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:59.799 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.799 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:59.799 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:59.799 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:59.799 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:00.058 22:48:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:00.058 rmmod nvme_tcp 00:39:00.058 rmmod nvme_fabrics 00:39:00.058 rmmod nvme_keyring 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 584720 ']' 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 584720 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 584720 ']' 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 584720 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 584720 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 584720' 00:39:00.058 killing process with pid 584720 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 584720 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 584720 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:00.058 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:39:00.059 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:00.059 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:00.317 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:00.317 22:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:00.317 22:48:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:02.222 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:02.222 00:39:02.222 real 0m10.610s 00:39:02.222 user 0m14.750s 00:39:02.222 sys 0m6.489s 00:39:02.222 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:02.222 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:02.222 ************************************ 00:39:02.222 END TEST nvmf_bdev_io_wait 00:39:02.222 ************************************ 00:39:02.222 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:02.222 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:02.222 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:02.222 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:02.222 ************************************ 00:39:02.222 START TEST nvmf_queue_depth 00:39:02.222 ************************************ 00:39:02.222 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:02.482 * Looking for test storage... 
00:39:02.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:39:02.482 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:02.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:02.483 --rc genhtml_branch_coverage=1 00:39:02.483 --rc genhtml_function_coverage=1 00:39:02.483 --rc genhtml_legend=1 00:39:02.483 --rc geninfo_all_blocks=1 00:39:02.483 --rc geninfo_unexecuted_blocks=1 00:39:02.483 00:39:02.483 ' 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:02.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:02.483 --rc genhtml_branch_coverage=1 00:39:02.483 --rc genhtml_function_coverage=1 00:39:02.483 --rc genhtml_legend=1 00:39:02.483 --rc geninfo_all_blocks=1 00:39:02.483 --rc geninfo_unexecuted_blocks=1 00:39:02.483 00:39:02.483 ' 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:02.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:02.483 --rc genhtml_branch_coverage=1 00:39:02.483 --rc genhtml_function_coverage=1 00:39:02.483 --rc genhtml_legend=1 00:39:02.483 --rc geninfo_all_blocks=1 00:39:02.483 --rc geninfo_unexecuted_blocks=1 00:39:02.483 00:39:02.483 ' 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:02.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:02.483 --rc genhtml_branch_coverage=1 00:39:02.483 --rc genhtml_function_coverage=1 00:39:02.483 --rc genhtml_legend=1 00:39:02.483 --rc 
geninfo_all_blocks=1 00:39:02.483 --rc geninfo_unexecuted_blocks=1 00:39:02.483 00:39:02.483 ' 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.483 22:48:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:02.483 22:48:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:02.483 22:48:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:39:02.483 22:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:39:09.061 
22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:09.061 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:09.061 22:48:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:09.061 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:09.061 Found net devices under 0000:af:00.0: cvl_0_0 00:39:09.061 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:09.062 Found net devices under 0000:af:00.1: cvl_0_1 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:09.062 22:48:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:09.062 22:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:09.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:09.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:39:09.062 00:39:09.062 --- 10.0.0.2 ping statistics --- 00:39:09.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:09.062 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:09.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:09.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:39:09.062 00:39:09.062 --- 10.0.0.1 ping statistics --- 00:39:09.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:09.062 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:09.062 22:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=588653 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 588653 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 588653 ']' 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:09.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:09.062 [2024-12-14 22:48:29.253110] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:09.062 [2024-12-14 22:48:29.254076] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:09.062 [2024-12-14 22:48:29.254115] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:09.062 [2024-12-14 22:48:29.333227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:09.062 [2024-12-14 22:48:29.355044] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:09.062 [2024-12-14 22:48:29.355079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:09.062 [2024-12-14 22:48:29.355087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:09.062 [2024-12-14 22:48:29.355092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:09.062 [2024-12-14 22:48:29.355097] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:09.062 [2024-12-14 22:48:29.355577] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:09.062 [2024-12-14 22:48:29.418080] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:09.062 [2024-12-14 22:48:29.418280] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:09.062 [2024-12-14 22:48:29.484310] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:09.062 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:09.063 Malloc0 00:39:09.063 22:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:09.063 [2024-12-14 22:48:29.564309] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.063 
22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=588693 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 588693 /var/tmp/bdevperf.sock 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 588693 ']' 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:09.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:09.063 [2024-12-14 22:48:29.612971] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:39:09.063 [2024-12-14 22:48:29.613012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid588693 ] 00:39:09.063 [2024-12-14 22:48:29.686102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:09.063 [2024-12-14 22:48:29.708772] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.063 22:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:09.322 NVMe0n1 00:39:09.322 22:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.322 22:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:09.322 Running I/O for 10 seconds... 
00:39:11.634 12288.00 IOPS, 48.00 MiB/s [2024-12-14T21:48:33.455Z] 12268.00 IOPS, 47.92 MiB/s [2024-12-14T21:48:34.391Z] 12391.67 IOPS, 48.40 MiB/s [2024-12-14T21:48:35.327Z] 12446.50 IOPS, 48.62 MiB/s [2024-12-14T21:48:36.260Z] 12505.00 IOPS, 48.85 MiB/s [2024-12-14T21:48:37.196Z] 12610.17 IOPS, 49.26 MiB/s [2024-12-14T21:48:38.573Z] 12594.43 IOPS, 49.20 MiB/s [2024-12-14T21:48:39.140Z] 12624.75 IOPS, 49.32 MiB/s [2024-12-14T21:48:40.517Z] 12633.78 IOPS, 49.35 MiB/s [2024-12-14T21:48:40.517Z] 12661.70 IOPS, 49.46 MiB/s 00:39:19.633 Latency(us) 00:39:19.633 [2024-12-14T21:48:40.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:19.633 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:39:19.633 Verification LBA range: start 0x0 length 0x4000 00:39:19.633 NVMe0n1 : 10.06 12685.17 49.55 0.00 0.00 80442.66 19223.89 55674.39 00:39:19.633 [2024-12-14T21:48:40.517Z] =================================================================================================================== 00:39:19.633 [2024-12-14T21:48:40.517Z] Total : 12685.17 49.55 0.00 0.00 80442.66 19223.89 55674.39 00:39:19.633 { 00:39:19.633 "results": [ 00:39:19.633 { 00:39:19.633 "job": "NVMe0n1", 00:39:19.633 "core_mask": "0x1", 00:39:19.633 "workload": "verify", 00:39:19.633 "status": "finished", 00:39:19.633 "verify_range": { 00:39:19.633 "start": 0, 00:39:19.633 "length": 16384 00:39:19.633 }, 00:39:19.633 "queue_depth": 1024, 00:39:19.633 "io_size": 4096, 00:39:19.633 "runtime": 10.06222, 00:39:19.633 "iops": 12685.172854499306, 00:39:19.634 "mibps": 49.55145646288791, 00:39:19.634 "io_failed": 0, 00:39:19.634 "io_timeout": 0, 00:39:19.634 "avg_latency_us": 80442.6572161132, 00:39:19.634 "min_latency_us": 19223.893333333333, 00:39:19.634 "max_latency_us": 55674.392380952384 00:39:19.634 } 00:39:19.634 ], 00:39:19.634 "core_count": 1 00:39:19.634 } 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
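The bdevperf summary above reports the same run both as IOPS and as MiB/s; the two are tied together by the 4096-byte I/O size. A quick shell sketch of that relationship, plugging in the `iops` and `io_size` values from the results JSON above:

```shell
# bdevperf reports throughput two ways; MiB/s is derived from IOPS:
#   mibps = iops * io_size_bytes / (1024 * 1024)
iops=12685.172854499306   # "iops" field from the results JSON above
io_size=4096              # "io_size" field from the results JSON above

mibps=$(awk -v i="$iops" -v s="$io_size" \
    'BEGIN { printf "%.2f\n", i * s / (1024 * 1024) }')
echo "$mibps"   # 49.55, matching the reported "mibps" field
```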
target/queue_depth.sh@39 -- # killprocess 588693 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 588693 ']' 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 588693 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 588693 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 588693' 00:39:19.634 killing process with pid 588693 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 588693 00:39:19.634 Received shutdown signal, test time was about 10.000000 seconds 00:39:19.634 00:39:19.634 Latency(us) 00:39:19.634 [2024-12-14T21:48:40.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:19.634 [2024-12-14T21:48:40.518Z] =================================================================================================================== 00:39:19.634 [2024-12-14T21:48:40.518Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 588693 00:39:19.634 22:48:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:19.634 rmmod nvme_tcp 00:39:19.634 rmmod nvme_fabrics 00:39:19.634 rmmod nvme_keyring 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 588653 ']' 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 588653 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 588653 ']' 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 588653 00:39:19.634 22:48:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:19.634 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 588653 00:39:19.893 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:19.893 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:19.893 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 588653' 00:39:19.893 killing process with pid 588653 00:39:19.893 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 588653 00:39:19.893 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 588653 00:39:19.893 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:19.893 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:19.893 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:19.893 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:39:19.893 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:39:19.893 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:19.893 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:39:19.893 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:19.893 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:19.893 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:19.893 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:19.893 22:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:22.429 22:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:22.429 00:39:22.429 real 0m19.733s 00:39:22.429 user 0m22.756s 00:39:22.429 sys 0m6.252s 00:39:22.429 22:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:22.429 22:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:22.429 ************************************ 00:39:22.429 END TEST nvmf_queue_depth 00:39:22.429 ************************************ 00:39:22.429 22:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:22.429 22:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:22.429 22:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:22.429 22:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:22.429 ************************************ 00:39:22.429 START 
TEST nvmf_target_multipath 00:39:22.429 ************************************ 00:39:22.429 22:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:22.429 * Looking for test storage... 00:39:22.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:22.429 22:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:22.429 22:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:39:22.429 22:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:39:22.429 22:48:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:22.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.429 --rc genhtml_branch_coverage=1 00:39:22.429 --rc genhtml_function_coverage=1 00:39:22.429 --rc genhtml_legend=1 00:39:22.429 --rc geninfo_all_blocks=1 00:39:22.429 --rc geninfo_unexecuted_blocks=1 00:39:22.429 00:39:22.429 ' 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:22.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.429 --rc genhtml_branch_coverage=1 00:39:22.429 --rc genhtml_function_coverage=1 00:39:22.429 --rc genhtml_legend=1 00:39:22.429 --rc geninfo_all_blocks=1 00:39:22.429 --rc geninfo_unexecuted_blocks=1 00:39:22.429 00:39:22.429 ' 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:22.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.429 --rc genhtml_branch_coverage=1 00:39:22.429 --rc genhtml_function_coverage=1 00:39:22.429 --rc genhtml_legend=1 00:39:22.429 --rc geninfo_all_blocks=1 00:39:22.429 --rc geninfo_unexecuted_blocks=1 00:39:22.429 00:39:22.429 ' 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:22.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.429 --rc genhtml_branch_coverage=1 00:39:22.429 --rc genhtml_function_coverage=1 00:39:22.429 --rc genhtml_legend=1 00:39:22.429 --rc geninfo_all_blocks=1 00:39:22.429 --rc geninfo_unexecuted_blocks=1 00:39:22.429 00:39:22.429 ' 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:22.429 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:22.430 22:48:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:22.430 22:48:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:39:22.430 22:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:39:27.707 22:48:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:27.707 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:27.708 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:27.708 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:27.708 Found net devices under 0000:af:00.0: cvl_0_0 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:27.708 22:48:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:27.708 Found net devices under 0000:af:00.1: cvl_0_1 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:27.708 22:48:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:27.708 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:27.967 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:27.967 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:27.967 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:27.967 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:27.967 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:27.967 22:48:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:27.967 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:27.967 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:27.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:27.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:39:27.967 00:39:27.967 --- 10.0.0.2 ping statistics --- 00:39:27.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:27.967 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:39:27.967 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:27.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:27.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:39:27.967 00:39:27.967 --- 10.0.0.1 ping statistics --- 00:39:27.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:27.967 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:39:27.967 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:27.967 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:39:27.967 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:27.967 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:27.967 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:27.967 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:27.967 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:27.967 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:27.968 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:27.968 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:39:27.968 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:39:27.968 only one NIC for nvmf test 00:39:27.968 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:39:27.968 22:48:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:27.968 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:27.968 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:27.968 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:27.968 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:27.968 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:27.968 rmmod nvme_tcp 00:39:27.968 rmmod nvme_fabrics 00:39:27.968 rmmod nvme_keyring 00:39:28.227 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:28.227 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:28.227 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:28.227 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:28.227 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:28.227 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:28.227 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:28.227 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:28.227 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:28.227 22:48:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:28.227 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:28.227 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:28.227 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:28.227 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:28.227 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:28.227 22:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:30.133 
22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:30.133 00:39:30.133 real 0m8.112s 00:39:30.133 user 0m1.760s 00:39:30.133 sys 0m4.357s 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:30.133 22:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:30.133 ************************************ 00:39:30.133 END TEST nvmf_target_multipath 00:39:30.133 ************************************ 00:39:30.393 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:30.393 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:30.393 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:30.393 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:30.393 ************************************ 00:39:30.393 START TEST nvmf_zcopy 00:39:30.393 ************************************ 00:39:30.393 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:30.393 * Looking for test storage... 
00:39:30.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:30.393 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:30.393 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:39:30.394 22:48:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:30.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.394 --rc genhtml_branch_coverage=1 00:39:30.394 --rc genhtml_function_coverage=1 00:39:30.394 --rc genhtml_legend=1 00:39:30.394 --rc geninfo_all_blocks=1 00:39:30.394 --rc geninfo_unexecuted_blocks=1 00:39:30.394 00:39:30.394 ' 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:30.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.394 --rc genhtml_branch_coverage=1 00:39:30.394 --rc genhtml_function_coverage=1 00:39:30.394 --rc genhtml_legend=1 00:39:30.394 --rc geninfo_all_blocks=1 00:39:30.394 --rc geninfo_unexecuted_blocks=1 00:39:30.394 00:39:30.394 ' 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:30.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.394 --rc genhtml_branch_coverage=1 00:39:30.394 --rc genhtml_function_coverage=1 00:39:30.394 --rc genhtml_legend=1 00:39:30.394 --rc geninfo_all_blocks=1 00:39:30.394 --rc geninfo_unexecuted_blocks=1 00:39:30.394 00:39:30.394 ' 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:30.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.394 --rc genhtml_branch_coverage=1 00:39:30.394 --rc genhtml_function_coverage=1 00:39:30.394 --rc genhtml_legend=1 00:39:30.394 --rc geninfo_all_blocks=1 00:39:30.394 --rc geninfo_unexecuted_blocks=1 00:39:30.394 00:39:30.394 ' 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:30.394 22:48:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:39:30.394 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:30.654 22:48:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:39:30.654 22:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:37.223 
22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:37.223 22:48:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:37.223 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:37.223 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:37.223 Found net devices under 0000:af:00.0: cvl_0_0 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:37.223 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:37.224 Found net devices under 0000:af:00.1: cvl_0_1 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:37.224 22:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:37.224 22:48:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:37.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:37.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:39:37.224 00:39:37.224 --- 10.0.0.2 ping statistics --- 00:39:37.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:37.224 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:37.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:37.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:39:37.224 00:39:37.224 --- 10.0.0.1 ping statistics --- 00:39:37.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:37.224 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=597169 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 597169 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 597169 ']' 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:37.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.224 [2024-12-14 22:48:57.243239] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:37.224 [2024-12-14 22:48:57.244139] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:39:37.224 [2024-12-14 22:48:57.244170] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:37.224 [2024-12-14 22:48:57.321999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:37.224 [2024-12-14 22:48:57.342906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:37.224 [2024-12-14 22:48:57.342943] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:37.224 [2024-12-14 22:48:57.342950] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:37.224 [2024-12-14 22:48:57.342955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:37.224 [2024-12-14 22:48:57.342960] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:37.224 [2024-12-14 22:48:57.343430] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:37.224 [2024-12-14 22:48:57.404573] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:37.224 [2024-12-14 22:48:57.404774] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.224 [2024-12-14 22:48:57.472165] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.224 
22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.224 [2024-12-14 22:48:57.500310] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.224 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.225 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.225 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:37.225 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.225 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.225 malloc0 00:39:37.225 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.225 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:37.225 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.225 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.225 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.225 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:37.225 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:37.225 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:37.225 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:37.225 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:37.225 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:37.225 { 00:39:37.225 "params": { 00:39:37.225 "name": "Nvme$subsystem", 00:39:37.225 "trtype": "$TEST_TRANSPORT", 00:39:37.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:37.225 "adrfam": "ipv4", 00:39:37.225 "trsvcid": "$NVMF_PORT", 00:39:37.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:37.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:37.225 "hdgst": ${hdgst:-false}, 00:39:37.225 "ddgst": ${ddgst:-false} 00:39:37.225 }, 00:39:37.225 "method": "bdev_nvme_attach_controller" 00:39:37.225 } 00:39:37.225 EOF 00:39:37.225 )") 00:39:37.225 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:37.225 22:48:57 
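The `rpc_cmd` calls above can be read as one target-provisioning sequence. A standalone sketch follows, assuming `rpc_cmd` ultimately resolves to SPDK's `scripts/rpc.py` against the default `/var/tmp/spdk.sock` socket; the function name and the `SPDK_DIR` variable are hypothetical, while every RPC name and argument is taken from the log.

```shell
# Sketch of the zcopy target setup driven by rpc_cmd in the log.
# Assumes SPDK_DIR points at an SPDK checkout and that nvmf_tgt is already
# running inside the cvl_0_0_ns_spdk namespace.
setup_zcopy_target() {
    local rpc="${SPDK_DIR:?}/scripts/rpc.py"
    local ns_exec=(ip netns exec cvl_0_0_ns_spdk)

    # TCP transport with zero-copy enabled (-o), no in-capsule data (-c 0)
    "${ns_exec[@]}" "$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy

    # Subsystem allowing any host (-a), serial number, max 10 namespaces
    "${ns_exec[@]}" "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10

    # Data and discovery listeners on the namespaced address
    "${ns_exec[@]}" "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    "${ns_exec[@]}" "$rpc" nvmf_subsystem_add_listener discovery \
        -t tcp -a 10.0.0.2 -s 4420

    # 32 MiB malloc bdev with 4 KiB blocks, exposed as NSID 1
    "${ns_exec[@]}" "$rpc" bdev_malloc_create 32 4096 -b malloc0
    "${ns_exec[@]}" "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
}
```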
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:39:37.225 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:37.225 22:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:37.225 "params": { 00:39:37.225 "name": "Nvme1", 00:39:37.225 "trtype": "tcp", 00:39:37.225 "traddr": "10.0.0.2", 00:39:37.225 "adrfam": "ipv4", 00:39:37.225 "trsvcid": "4420", 00:39:37.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:37.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:37.225 "hdgst": false, 00:39:37.225 "ddgst": false 00:39:37.225 }, 00:39:37.225 "method": "bdev_nvme_attach_controller" 00:39:37.225 }' 00:39:37.225 [2024-12-14 22:48:57.594393] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:37.225 [2024-12-14 22:48:57.594436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597190 ] 00:39:37.225 [2024-12-14 22:48:57.668082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:37.225 [2024-12-14 22:48:57.690262] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:37.225 Running I/O for 10 seconds... 
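The `gen_nvmf_target_json` output printed above is the bdev config that bdevperf consumes through `--json /dev/fd/62`. A minimal re-creation of that emitted object, with the values substituted exactly as the log shows them; the function name here is hypothetical, and the real helper assembles this from templated `config+=` fragments rather than a fixed heredoc.

```shell
# Emit the bdev_nvme_attach_controller object seen in the log's printf.
gen_target_json() {
    cat <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
```

Fed to the benchmark roughly as the log does, e.g. `bdevperf --json <(gen_target_json) -t 10 -q 128 -w verify -o 8192`, so the initiator attaches to the namespaced target before the 10-second verify run starts.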
00:39:39.097 8573.00 IOPS, 66.98 MiB/s [2024-12-14T21:49:00.917Z] 8644.50 IOPS, 67.54 MiB/s [2024-12-14T21:49:01.852Z] 8674.33 IOPS, 67.77 MiB/s [2024-12-14T21:49:03.228Z] 8661.00 IOPS, 67.66 MiB/s [2024-12-14T21:49:04.164Z] 8674.80 IOPS, 67.77 MiB/s [2024-12-14T21:49:05.100Z] 8687.00 IOPS, 67.87 MiB/s [2024-12-14T21:49:06.034Z] 8690.57 IOPS, 67.90 MiB/s [2024-12-14T21:49:06.969Z] 8700.38 IOPS, 67.97 MiB/s [2024-12-14T21:49:07.905Z] 8708.11 IOPS, 68.03 MiB/s [2024-12-14T21:49:07.905Z] 8709.90 IOPS, 68.05 MiB/s 00:39:47.021 Latency(us) 00:39:47.021 [2024-12-14T21:49:07.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:47.021 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:47.021 Verification LBA range: start 0x0 length 0x1000 00:39:47.021 Nvme1n1 : 10.01 8712.72 68.07 0.00 0.00 14649.37 353.04 20597.03 00:39:47.021 [2024-12-14T21:49:07.905Z] =================================================================================================================== 00:39:47.021 [2024-12-14T21:49:07.905Z] Total : 8712.72 68.07 0.00 0.00 14649.37 353.04 20597.03 00:39:47.280 22:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=598832 00:39:47.280 22:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:47.280 22:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:47.280 22:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:47.280 22:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:47.280 22:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:47.280 22:49:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:47.280 22:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:47.280 22:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:47.280 { 00:39:47.280 "params": { 00:39:47.280 "name": "Nvme$subsystem", 00:39:47.280 "trtype": "$TEST_TRANSPORT", 00:39:47.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:47.280 "adrfam": "ipv4", 00:39:47.280 "trsvcid": "$NVMF_PORT", 00:39:47.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:47.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:47.280 "hdgst": ${hdgst:-false}, 00:39:47.280 "ddgst": ${ddgst:-false} 00:39:47.280 }, 00:39:47.280 "method": "bdev_nvme_attach_controller" 00:39:47.280 } 00:39:47.280 EOF 00:39:47.280 )") 00:39:47.280 [2024-12-14 22:49:08.027764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.280 [2024-12-14 22:49:08.027794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.281 22:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:47.281 22:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:39:47.281 22:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:47.281 22:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:47.281 "params": { 00:39:47.281 "name": "Nvme1", 00:39:47.281 "trtype": "tcp", 00:39:47.281 "traddr": "10.0.0.2", 00:39:47.281 "adrfam": "ipv4", 00:39:47.281 "trsvcid": "4420", 00:39:47.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:47.281 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:47.281 "hdgst": false, 00:39:47.281 "ddgst": false 00:39:47.281 }, 00:39:47.281 "method": "bdev_nvme_attach_controller" 00:39:47.281 }' 00:39:47.281 [2024-12-14 22:49:08.039725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.281 [2024-12-14 22:49:08.039738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.281 [2024-12-14 22:49:08.051722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.281 [2024-12-14 22:49:08.051733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.281 [2024-12-14 22:49:08.063723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.281 [2024-12-14 22:49:08.063734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.281 [2024-12-14 22:49:08.066234] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:39:47.281 [2024-12-14 22:49:08.066276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid598832 ] 00:39:47.281 [2024-12-14 22:49:08.075722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.281 [2024-12-14 22:49:08.075734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.281 [2024-12-14 22:49:08.087725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.281 [2024-12-14 22:49:08.087739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.281 [2024-12-14 22:49:08.099722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.281 [2024-12-14 22:49:08.099732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.281 [2024-12-14 22:49:08.111722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.281 [2024-12-14 22:49:08.111733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.281 [2024-12-14 22:49:08.123721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.281 [2024-12-14 22:49:08.123732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.281 [2024-12-14 22:49:08.135724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.281 [2024-12-14 22:49:08.135738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.281 [2024-12-14 22:49:08.137843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:47.281 [2024-12-14 22:49:08.147729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:39:47.281 [2024-12-14 22:49:08.147743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.281 [2024-12-14 22:49:08.159727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.281 [2024-12-14 22:49:08.159743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.281 [2024-12-14 22:49:08.160219] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:47.540 [2024-12-14 22:49:08.171730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.171747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 [2024-12-14 22:49:08.183730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.183748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 [2024-12-14 22:49:08.195727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.195741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 [2024-12-14 22:49:08.207724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.207736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 [2024-12-14 22:49:08.219727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.219741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 [2024-12-14 22:49:08.231723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.231733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 [2024-12-14 22:49:08.243735] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.243754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 [2024-12-14 22:49:08.255732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.255747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 [2024-12-14 22:49:08.267729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.267745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 [2024-12-14 22:49:08.279729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.279745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 [2024-12-14 22:49:08.291730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.291746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 [2024-12-14 22:49:08.303730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.303746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 Running I/O for 5 seconds... 
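The dense pairs of "Requested NSID 1 already in use" / "Unable to add namespace" errors above suggest the test deliberately re-adds an already-attached namespace while bdevperf I/O is in flight, expecting every attempt to be rejected. A hypothetical reconstruction of such a loop (function name and iteration count are assumptions, not taken from zcopy.sh):

```shell
# Hypothetical sketch of the conflict loop implied by the repeated
# "Requested NSID 1 already in use" errors: NSID 1 is already attached,
# so every re-add must fail while the randrw workload keeps running.
exercise_ns_conflict() {
    local rpc="${SPDK_DIR:?}/scripts/rpc.py" i
    for ((i = 0; i < 50; i++)); do
        if "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
            2> /dev/null; then
            echo "unexpected success re-adding NSID 1" >&2
            return 1
        fi
    done
}
```

The point of the exercise is that the target keeps servicing zero-copy I/O while rejecting the conflicting RPCs, which is why the error pairs are interleaved with the 5-second benchmark run rather than aborting it.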
00:39:47.540 [2024-12-14 22:49:08.316622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.316640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 [2024-12-14 22:49:08.331431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.331451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 [2024-12-14 22:49:08.345638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.345657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 [2024-12-14 22:49:08.360830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.360848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 [2024-12-14 22:49:08.375981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.376000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 [2024-12-14 22:49:08.387079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.387098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 [2024-12-14 22:49:08.401619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.401638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.540 [2024-12-14 22:49:08.416090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.540 [2024-12-14 22:49:08.416107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.431399] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.799 [2024-12-14 22:49:08.431418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.445312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.799 [2024-12-14 22:49:08.445331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.460192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.799 [2024-12-14 22:49:08.460209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.475576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.799 [2024-12-14 22:49:08.475595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.486715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.799 [2024-12-14 22:49:08.486733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.501438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.799 [2024-12-14 22:49:08.501456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.515814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.799 [2024-12-14 22:49:08.515833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.528510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.799 [2024-12-14 22:49:08.528528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.543713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:47.799 [2024-12-14 22:49:08.543732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.557967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.799 [2024-12-14 22:49:08.557987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.572438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.799 [2024-12-14 22:49:08.572456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.587655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.799 [2024-12-14 22:49:08.587676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.601683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.799 [2024-12-14 22:49:08.601702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.616463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.799 [2024-12-14 22:49:08.616482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.631647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.799 [2024-12-14 22:49:08.631667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.645010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.799 [2024-12-14 22:49:08.645029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.657268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.799 
[2024-12-14 22:49:08.657286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.671924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.799 [2024-12-14 22:49:08.671959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.799 [2024-12-14 22:49:08.682353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.799 [2024-12-14 22:49:08.682371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.058 [2024-12-14 22:49:08.697216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.058 [2024-12-14 22:49:08.697235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.058 [2024-12-14 22:49:08.711666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.058 [2024-12-14 22:49:08.711685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.058 [2024-12-14 22:49:08.724765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.058 [2024-12-14 22:49:08.724783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.058 [2024-12-14 22:49:08.739668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.058 [2024-12-14 22:49:08.739687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.058 [2024-12-14 22:49:08.750525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.058 [2024-12-14 22:49:08.750543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.058 [2024-12-14 22:49:08.765069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.058 [2024-12-14 22:49:08.765087] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:48.058 [2024-12-14 22:49:08.780179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:48.058 [2024-12-14 22:49:08.780198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair ("Requested NSID 1 already in use" / "Unable to add namespace") repeats continuously, roughly every 10-15 ms, from 22:49:08.795821 through 22:49:10.964203; repeated entries omitted ...]
16826.00 IOPS, 131.45 MiB/s [2024-12-14T21:49:09.461Z]
16797.00 IOPS, 131.23 MiB/s [2024-12-14T21:49:10.498Z]
00:39:50.133 [2024-12-14 22:49:10.979664]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.133 [2024-12-14 22:49:10.979683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.133 [2024-12-14 22:49:10.993478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.133 [2024-12-14 22:49:10.993498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.133 [2024-12-14 22:49:11.007970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.133 [2024-12-14 22:49:11.007989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.391 [2024-12-14 22:49:11.018551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.391 [2024-12-14 22:49:11.018575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.391 [2024-12-14 22:49:11.033005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.391 [2024-12-14 22:49:11.033024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.391 [2024-12-14 22:49:11.047718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.391 [2024-12-14 22:49:11.047737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.391 [2024-12-14 22:49:11.060504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.391 [2024-12-14 22:49:11.060523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.391 [2024-12-14 22:49:11.073682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.391 [2024-12-14 22:49:11.073701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.392 [2024-12-14 22:49:11.088491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:50.392 [2024-12-14 22:49:11.088509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.392 [2024-12-14 22:49:11.103500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.392 [2024-12-14 22:49:11.103519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.392 [2024-12-14 22:49:11.117613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.392 [2024-12-14 22:49:11.117632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.392 [2024-12-14 22:49:11.131935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.392 [2024-12-14 22:49:11.131956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.392 [2024-12-14 22:49:11.142109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.392 [2024-12-14 22:49:11.142128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.392 [2024-12-14 22:49:11.156486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.392 [2024-12-14 22:49:11.156506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.392 [2024-12-14 22:49:11.171892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.392 [2024-12-14 22:49:11.171922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.392 [2024-12-14 22:49:11.184173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.392 [2024-12-14 22:49:11.184191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.392 [2024-12-14 22:49:11.199317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.392 
[2024-12-14 22:49:11.199336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.392 [2024-12-14 22:49:11.212341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.392 [2024-12-14 22:49:11.212360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.392 [2024-12-14 22:49:11.225639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.392 [2024-12-14 22:49:11.225657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.392 [2024-12-14 22:49:11.240505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.392 [2024-12-14 22:49:11.240523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.392 [2024-12-14 22:49:11.255375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.392 [2024-12-14 22:49:11.255393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.392 [2024-12-14 22:49:11.269728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.392 [2024-12-14 22:49:11.269747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.651 [2024-12-14 22:49:11.284754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.651 [2024-12-14 22:49:11.284778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.651 [2024-12-14 22:49:11.299629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.651 [2024-12-14 22:49:11.299648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.651 [2024-12-14 22:49:11.313635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.651 [2024-12-14 22:49:11.313653] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.651 16819.33 IOPS, 131.40 MiB/s [2024-12-14T21:49:11.535Z] [2024-12-14 22:49:11.328410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.651 [2024-12-14 22:49:11.328428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.651 [2024-12-14 22:49:11.343487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.651 [2024-12-14 22:49:11.343506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.651 [2024-12-14 22:49:11.357029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.651 [2024-12-14 22:49:11.357048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.651 [2024-12-14 22:49:11.371832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.651 [2024-12-14 22:49:11.371852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.651 [2024-12-14 22:49:11.384782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.651 [2024-12-14 22:49:11.384800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.651 [2024-12-14 22:49:11.399436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.651 [2024-12-14 22:49:11.399460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.651 [2024-12-14 22:49:11.412953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.651 [2024-12-14 22:49:11.412972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.651 [2024-12-14 22:49:11.423851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.651 [2024-12-14 22:49:11.423869] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.651 [2024-12-14 22:49:11.439848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.651 [2024-12-14 22:49:11.439867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.651 [2024-12-14 22:49:11.453418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.651 [2024-12-14 22:49:11.453437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.651 [2024-12-14 22:49:11.468136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.651 [2024-12-14 22:49:11.468154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.651 [2024-12-14 22:49:11.483325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.651 [2024-12-14 22:49:11.483344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.651 [2024-12-14 22:49:11.497868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.651 [2024-12-14 22:49:11.497888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.651 [2024-12-14 22:49:11.512740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.651 [2024-12-14 22:49:11.512758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.651 [2024-12-14 22:49:11.527619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.651 [2024-12-14 22:49:11.527638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.910 [2024-12-14 22:49:11.540333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.540351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:50.910 [2024-12-14 22:49:11.553150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.553174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.910 [2024-12-14 22:49:11.567777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.567796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.910 [2024-12-14 22:49:11.579004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.579022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.910 [2024-12-14 22:49:11.592843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.592861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.910 [2024-12-14 22:49:11.605245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.605264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.910 [2024-12-14 22:49:11.619838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.619857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.910 [2024-12-14 22:49:11.632475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.632492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.910 [2024-12-14 22:49:11.647713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.647731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.910 [2024-12-14 22:49:11.658427] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.658445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.910 [2024-12-14 22:49:11.673483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.673502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.910 [2024-12-14 22:49:11.687828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.687846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.910 [2024-12-14 22:49:11.699039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.699058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.910 [2024-12-14 22:49:11.713431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.713449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.910 [2024-12-14 22:49:11.728155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.728173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.910 [2024-12-14 22:49:11.744187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.744205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.910 [2024-12-14 22:49:11.758993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.759012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.910 [2024-12-14 22:49:11.773957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.773976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.910 [2024-12-14 22:49:11.788403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.910 [2024-12-14 22:49:11.788422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.169 [2024-12-14 22:49:11.803684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.169 [2024-12-14 22:49:11.803703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.169 [2024-12-14 22:49:11.815325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.169 [2024-12-14 22:49:11.815344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.169 [2024-12-14 22:49:11.829568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.169 [2024-12-14 22:49:11.829587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.169 [2024-12-14 22:49:11.844588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.169 [2024-12-14 22:49:11.844607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.169 [2024-12-14 22:49:11.859999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.169 [2024-12-14 22:49:11.860017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.169 [2024-12-14 22:49:11.872826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.169 [2024-12-14 22:49:11.872845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.169 [2024-12-14 22:49:11.885447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.169 
[2024-12-14 22:49:11.885465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.169 [2024-12-14 22:49:11.900184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.169 [2024-12-14 22:49:11.900201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.169 [2024-12-14 22:49:11.915572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.169 [2024-12-14 22:49:11.915592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.169 [2024-12-14 22:49:11.928512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.169 [2024-12-14 22:49:11.928531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.169 [2024-12-14 22:49:11.943505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.169 [2024-12-14 22:49:11.943523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.169 [2024-12-14 22:49:11.954455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.169 [2024-12-14 22:49:11.954473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.169 [2024-12-14 22:49:11.969135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.169 [2024-12-14 22:49:11.969153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.169 [2024-12-14 22:49:11.983959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.169 [2024-12-14 22:49:11.983978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.169 [2024-12-14 22:49:11.997532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.169 [2024-12-14 22:49:11.997550] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.169 [2024-12-14 22:49:12.012771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.169 [2024-12-14 22:49:12.012790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.169 [2024-12-14 22:49:12.027310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.169 [2024-12-14 22:49:12.027329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.169 [2024-12-14 22:49:12.039842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.169 [2024-12-14 22:49:12.039862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.429 [2024-12-14 22:49:12.053367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.053387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.429 [2024-12-14 22:49:12.068315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.068333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.429 [2024-12-14 22:49:12.084133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.084152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.429 [2024-12-14 22:49:12.099182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.099202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.429 [2024-12-14 22:49:12.112828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.112846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:51.429 [2024-12-14 22:49:12.123946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.123965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.429 [2024-12-14 22:49:12.137146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.137165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.429 [2024-12-14 22:49:12.151912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.151948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.429 [2024-12-14 22:49:12.163278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.163298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.429 [2024-12-14 22:49:12.177254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.177273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.429 [2024-12-14 22:49:12.192070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.192100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.429 [2024-12-14 22:49:12.207310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.207331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.429 [2024-12-14 22:49:12.221312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.221331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.429 [2024-12-14 22:49:12.235667] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.235686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.429 [2024-12-14 22:49:12.248667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.248686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.429 [2024-12-14 22:49:12.261342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.261360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.429 [2024-12-14 22:49:12.276165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.276183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.429 [2024-12-14 22:49:12.291322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.291343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.429 [2024-12-14 22:49:12.306210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.429 [2024-12-14 22:49:12.306240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.688 16809.00 IOPS, 131.32 MiB/s [2024-12-14T21:49:12.572Z] [2024-12-14 22:49:12.321230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.688 [2024-12-14 22:49:12.321249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.688 [2024-12-14 22:49:12.335651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.688 [2024-12-14 22:49:12.335675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.688 [2024-12-14 22:49:12.348886] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.688 [2024-12-14 22:49:12.348911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.688 [2024-12-14 22:49:12.359987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.688 [2024-12-14 22:49:12.360006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.688 [2024-12-14 22:49:12.373347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.688 [2024-12-14 22:49:12.373366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.688 [2024-12-14 22:49:12.388348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.688 [2024-12-14 22:49:12.388368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.688 [2024-12-14 22:49:12.403513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.688 [2024-12-14 22:49:12.403533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.688 [2024-12-14 22:49:12.417978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.688 [2024-12-14 22:49:12.417998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.688 [2024-12-14 22:49:12.432701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.688 [2024-12-14 22:49:12.432720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.688 [2024-12-14 22:49:12.448114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.688 [2024-12-14 22:49:12.448134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.688 [2024-12-14 22:49:12.461607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:51.688 [2024-12-14 22:49:12.461626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.688 [2024-12-14 22:49:12.476673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.688 [2024-12-14 22:49:12.476692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.688 [2024-12-14 22:49:12.491326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.688 [2024-12-14 22:49:12.491346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.688 [2024-12-14 22:49:12.505731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.688 [2024-12-14 22:49:12.505750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.688 [2024-12-14 22:49:12.520560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.688 [2024-12-14 22:49:12.520579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.688 [2024-12-14 22:49:12.536528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.688 [2024-12-14 22:49:12.536547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.688 [2024-12-14 22:49:12.551474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.688 [2024-12-14 22:49:12.551493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.688 [2024-12-14 22:49:12.564181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.688 [2024-12-14 22:49:12.564199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.959 [2024-12-14 22:49:12.577373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.959 
[2024-12-14 22:49:12.577393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.959 [2024-12-14 22:49:12.593225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.959 [2024-12-14 22:49:12.593244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.959 [2024-12-14 22:49:12.608685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.959 [2024-12-14 22:49:12.608708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.959 [2024-12-14 22:49:12.624118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.959 [2024-12-14 22:49:12.624136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.959 [2024-12-14 22:49:12.639318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.959 [2024-12-14 22:49:12.639337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.959 [2024-12-14 22:49:12.653505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.959 [2024-12-14 22:49:12.653523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.959 [2024-12-14 22:49:12.668108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.959 [2024-12-14 22:49:12.668126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.959 [2024-12-14 22:49:12.683416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.959 [2024-12-14 22:49:12.683435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.959 [2024-12-14 22:49:12.697605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.959 [2024-12-14 22:49:12.697624] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.959 [2024-12-14 22:49:12.712309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.959 [2024-12-14 22:49:12.712328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.959 [2024-12-14 22:49:13.265471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.553 [2024-12-14 22:49:13.265489]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.553 [2024-12-14 22:49:13.279770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.553 [2024-12-14 22:49:13.279788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.553 [2024-12-14 22:49:13.291885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.553 [2024-12-14 22:49:13.291911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.553 [2024-12-14 22:49:13.305884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.553 [2024-12-14 22:49:13.305908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.553 [2024-12-14 22:49:13.320425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.553 [2024-12-14 22:49:13.320443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.553 16814.40 IOPS, 131.36 MiB/s 00:39:52.553 Latency(us) 00:39:52.553 [2024-12-14T21:49:13.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:52.553 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:39:52.553 Nvme1n1 : 5.01 16817.37 131.39 0.00 0.00 7604.45 1966.08 12857.54 00:39:52.553 [2024-12-14T21:49:13.437Z] =================================================================================================================== 00:39:52.553 [2024-12-14T21:49:13.437Z] Total : 16817.37 131.39 0.00 0.00 7604.45 1966.08 12857.54 00:39:52.553 [2024-12-14 22:49:13.331731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.553 [2024-12-14 22:49:13.331748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.553 [2024-12-14 22:49:13.343727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:39:52.553 [2024-12-14 22:49:13.343742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.553 [2024-12-14 22:49:13.439729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.842
[2024-12-14 22:49:13.439746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.842 [2024-12-14 22:49:13.451727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.842 [2024-12-14 22:49:13.451739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.842 [2024-12-14 22:49:13.463726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.842 [2024-12-14 22:49:13.463737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.842 [2024-12-14 22:49:13.475724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.842 [2024-12-14 22:49:13.475733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (598832) - No such process 00:39:52.842 22:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 598832 00:39:52.842 22:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:52.842 22:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.842 22:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:52.842 22:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.842 22:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:52.842 22:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.842 22:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- common/autotest_common.sh@10 -- # set +x 00:39:52.842 delay0 00:39:52.842 22:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.842 22:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:52.842 22:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.842 22:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:52.842 22:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.842 22:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:52.842 [2024-12-14 22:49:13.624617] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:40:00.963 Initializing NVMe Controllers 00:40:00.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:00.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:40:00.963 Initialization complete. Launching workers. 
00:40:00.963 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 6799 00:40:00.963 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7076, failed to submit 43 00:40:00.963 success 6929, unsuccessful 147, failed 0 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:00.963 rmmod nvme_tcp 00:40:00.963 rmmod nvme_fabrics 00:40:00.963 rmmod nvme_keyring 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 597169 ']' 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 597169 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 
-- # '[' -z 597169 ']' 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 597169 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 597169 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 597169' 00:40:00.963 killing process with pid 597169 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 597169 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 597169 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:00.963 22:49:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:00.963 22:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:02.343 22:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:02.343 00:40:02.343 real 0m31.864s 00:40:02.343 user 0m41.284s 00:40:02.343 sys 0m12.521s 00:40:02.343 22:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:02.343 22:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:02.343 ************************************ 00:40:02.343 END TEST nvmf_zcopy 00:40:02.343 ************************************ 00:40:02.343 22:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:02.343 22:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:02.343 22:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:02.343 22:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:02.343 
************************************ 00:40:02.343 START TEST nvmf_nmic 00:40:02.343 ************************************ 00:40:02.343 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:02.343 * Looking for test storage... 00:40:02.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:02.343 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:40:02.344 22:49:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:40:02.344 22:49:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:02.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.344 --rc genhtml_branch_coverage=1 00:40:02.344 --rc genhtml_function_coverage=1 00:40:02.344 --rc genhtml_legend=1 00:40:02.344 --rc geninfo_all_blocks=1 00:40:02.344 --rc geninfo_unexecuted_blocks=1 00:40:02.344 00:40:02.344 ' 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:02.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.344 --rc genhtml_branch_coverage=1 00:40:02.344 --rc genhtml_function_coverage=1 00:40:02.344 --rc genhtml_legend=1 00:40:02.344 --rc geninfo_all_blocks=1 00:40:02.344 --rc geninfo_unexecuted_blocks=1 00:40:02.344 00:40:02.344 ' 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:02.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.344 --rc genhtml_branch_coverage=1 00:40:02.344 --rc genhtml_function_coverage=1 00:40:02.344 --rc genhtml_legend=1 00:40:02.344 --rc geninfo_all_blocks=1 00:40:02.344 --rc geninfo_unexecuted_blocks=1 00:40:02.344 00:40:02.344 ' 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:02.344 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.344 --rc genhtml_branch_coverage=1 00:40:02.344 --rc genhtml_function_coverage=1 00:40:02.344 --rc genhtml_legend=1 00:40:02.344 --rc geninfo_all_blocks=1 00:40:02.344 --rc geninfo_unexecuted_blocks=1 00:40:02.344 00:40:02.344 ' 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:02.344 22:49:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.344 22:49:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:02.344 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:40:02.345 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:02.345 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:02.345 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:02.345 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:02.345 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:02.345 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:02.345 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:02.345 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:02.345 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:02.345 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:02.345 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:40:02.345 22:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:08.917 22:49:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:08.917 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:40:08.917 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:08.917 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:08.917 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:08.917 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:08.917 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:08.917 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:40:08.917 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:08.917 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:40:08.917 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:40:08.917 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:40:08.917 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:40:08.917 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:40:08.917 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:40:08.917 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:08.918 22:49:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:08.918 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:08.918 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:08.918 22:49:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:08.918 Found net devices under 0000:af:00.0: cvl_0_0 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:08.918 22:49:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:08.918 Found net devices under 0000:af:00.1: cvl_0_1 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:08.918 22:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:08.918 22:49:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:08.918 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:08.918 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:08.918 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:08.918 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:08.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:08.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:40:08.918 00:40:08.918 --- 10.0.0.2 ping statistics --- 00:40:08.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:08.918 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:40:08.918 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:08.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:08.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:40:08.918 00:40:08.918 --- 10.0.0.1 ping statistics --- 00:40:08.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:08.918 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:40:08.918 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:08.918 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:40:08.918 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:08.918 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:08.918 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:08.918 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:08.918 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:08.918 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:08.918 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:08.918 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:40:08.918 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:08.918 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=604227 
00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 604227 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 604227 ']' 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:08.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:08.919 [2024-12-14 22:49:29.135258] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:08.919 [2024-12-14 22:49:29.136214] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:40:08.919 [2024-12-14 22:49:29.136252] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:08.919 [2024-12-14 22:49:29.214994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:08.919 [2024-12-14 22:49:29.239964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:08.919 [2024-12-14 22:49:29.240003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:08.919 [2024-12-14 22:49:29.240011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:08.919 [2024-12-14 22:49:29.240016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:08.919 [2024-12-14 22:49:29.240022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:08.919 [2024-12-14 22:49:29.241402] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:08.919 [2024-12-14 22:49:29.241516] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:08.919 [2024-12-14 22:49:29.241620] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:08.919 [2024-12-14 22:49:29.241622] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:08.919 [2024-12-14 22:49:29.305570] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:08.919 [2024-12-14 22:49:29.306663] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:08.919 [2024-12-14 22:49:29.306799] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:08.919 [2024-12-14 22:49:29.307092] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:08.919 [2024-12-14 22:49:29.307167] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:08.919 [2024-12-14 22:49:29.370302] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:08.919 Malloc0 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:08.919 [2024-12-14 22:49:29.450570] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:08.919 22:49:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:40:08.919 test case1: single bdev can't be used in multiple subsystems 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:08.919 [2024-12-14 22:49:29.477985] 
bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:40:08.919 [2024-12-14 22:49:29.478007] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:40:08.919 [2024-12-14 22:49:29.478015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:08.919 request: 00:40:08.919 { 00:40:08.919 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:40:08.919 "namespace": { 00:40:08.919 "bdev_name": "Malloc0", 00:40:08.919 "no_auto_visible": false, 00:40:08.919 "hide_metadata": false 00:40:08.919 }, 00:40:08.919 "method": "nvmf_subsystem_add_ns", 00:40:08.919 "req_id": 1 00:40:08.919 } 00:40:08.919 Got JSON-RPC error response 00:40:08.919 response: 00:40:08.919 { 00:40:08.919 "code": -32602, 00:40:08.919 "message": "Invalid parameters" 00:40:08.919 } 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:40:08.919 Adding namespace failed - expected result. 
00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:40:08.919 test case2: host connect to nvmf target in multiple paths 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:08.919 [2024-12-14 22:49:29.490087] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:08.919 22:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:40:09.488 22:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:40:09.488 22:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:40:09.488 22:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:09.488 22:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:09.488 22:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:40:11.392 22:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:11.392 22:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:11.392 22:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:11.392 22:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:11.392 22:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:11.392 22:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:40:11.392 22:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:11.392 [global] 00:40:11.392 thread=1 00:40:11.392 invalidate=1 00:40:11.392 rw=write 00:40:11.392 time_based=1 00:40:11.392 runtime=1 00:40:11.392 ioengine=libaio 00:40:11.392 direct=1 00:40:11.392 bs=4096 00:40:11.392 iodepth=1 00:40:11.392 norandommap=0 00:40:11.392 numjobs=1 00:40:11.392 00:40:11.392 verify_dump=1 00:40:11.392 verify_backlog=512 00:40:11.392 verify_state_save=0 00:40:11.392 do_verify=1 00:40:11.392 verify=crc32c-intel 00:40:11.392 [job0] 00:40:11.392 filename=/dev/nvme0n1 00:40:11.392 Could not set queue depth (nvme0n1) 00:40:11.651 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:11.651 fio-3.35 00:40:11.651 Starting 1 thread 00:40:13.030 00:40:13.030 job0: (groupid=0, jobs=1): err= 0: pid=604831: Sat Dec 14 
22:49:33 2024 00:40:13.030 read: IOPS=20, BW=83.2KiB/s (85.2kB/s)(84.0KiB/1009msec) 00:40:13.030 slat (nsec): min=9649, max=23946, avg=21645.00, stdev=2825.57 00:40:13.030 clat (usec): min=40855, max=41389, avg=40989.27, stdev=106.39 00:40:13.030 lat (usec): min=40879, max=41399, avg=41010.92, stdev=103.83 00:40:13.030 clat percentiles (usec): 00:40:13.030 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:40:13.030 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:13.030 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:13.030 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:13.030 | 99.99th=[41157] 00:40:13.030 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:40:13.030 slat (usec): min=10, max=40672, avg=144.24, stdev=2159.57 00:40:13.030 clat (usec): min=129, max=268, avg=140.41, stdev= 8.38 00:40:13.030 lat (usec): min=140, max=40893, avg=284.65, stdev=2165.69 00:40:13.030 clat percentiles (usec): 00:40:13.030 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 137], 00:40:13.030 | 30.00th=[ 139], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 141], 00:40:13.030 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 145], 95.00th=[ 147], 00:40:13.030 | 99.00th=[ 169], 99.50th=[ 188], 99.90th=[ 269], 99.95th=[ 269], 00:40:13.030 | 99.99th=[ 269] 00:40:13.030 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:40:13.030 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:13.030 lat (usec) : 250=95.87%, 500=0.19% 00:40:13.030 lat (msec) : 50=3.94% 00:40:13.030 cpu : usr=0.60%, sys=0.69%, ctx=536, majf=0, minf=1 00:40:13.030 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:13.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:13.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:13.030 issued rwts: 
total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:13.030 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:13.030 00:40:13.030 Run status group 0 (all jobs): 00:40:13.030 READ: bw=83.2KiB/s (85.2kB/s), 83.2KiB/s-83.2KiB/s (85.2kB/s-85.2kB/s), io=84.0KiB (86.0kB), run=1009-1009msec 00:40:13.030 WRITE: bw=2030KiB/s (2078kB/s), 2030KiB/s-2030KiB/s (2078kB/s-2078kB/s), io=2048KiB (2097kB), run=1009-1009msec 00:40:13.030 00:40:13.030 Disk stats (read/write): 00:40:13.030 nvme0n1: ios=43/512, merge=0/0, ticks=1723/62, in_queue=1785, util=99.70% 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:13.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:40:13.030 22:49:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:13.030 rmmod nvme_tcp 00:40:13.030 rmmod nvme_fabrics 00:40:13.030 rmmod nvme_keyring 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 604227 ']' 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 604227 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 604227 ']' 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 604227 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 604227 00:40:13.030 
22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 604227' 00:40:13.030 killing process with pid 604227 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 604227 00:40:13.030 22:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 604227 00:40:13.290 22:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:13.290 22:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:13.290 22:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:13.290 22:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:40:13.290 22:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:40:13.290 22:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:13.290 22:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:40:13.290 22:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:13.290 22:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:13.290 22:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:13.290 22:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:13.290 22:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:15.825 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:15.825 00:40:15.825 real 0m13.117s 00:40:15.825 user 0m24.378s 00:40:15.825 sys 0m6.022s 00:40:15.825 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:15.825 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:15.825 ************************************ 00:40:15.825 END TEST nvmf_nmic 00:40:15.825 ************************************ 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:15.826 ************************************ 00:40:15.826 START TEST nvmf_fio_target 00:40:15.826 ************************************ 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:15.826 * Looking for test storage... 
00:40:15.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:15.826 
22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:15.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.826 --rc genhtml_branch_coverage=1 00:40:15.826 --rc genhtml_function_coverage=1 00:40:15.826 --rc genhtml_legend=1 00:40:15.826 --rc geninfo_all_blocks=1 00:40:15.826 --rc geninfo_unexecuted_blocks=1 00:40:15.826 00:40:15.826 ' 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:15.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.826 --rc genhtml_branch_coverage=1 00:40:15.826 --rc genhtml_function_coverage=1 00:40:15.826 --rc genhtml_legend=1 00:40:15.826 --rc geninfo_all_blocks=1 00:40:15.826 --rc geninfo_unexecuted_blocks=1 00:40:15.826 00:40:15.826 ' 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:15.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.826 --rc genhtml_branch_coverage=1 00:40:15.826 --rc genhtml_function_coverage=1 00:40:15.826 --rc genhtml_legend=1 00:40:15.826 --rc geninfo_all_blocks=1 00:40:15.826 --rc geninfo_unexecuted_blocks=1 00:40:15.826 00:40:15.826 ' 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:15.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.826 --rc genhtml_branch_coverage=1 00:40:15.826 --rc genhtml_function_coverage=1 00:40:15.826 --rc genhtml_legend=1 00:40:15.826 --rc geninfo_all_blocks=1 
00:40:15.826 --rc geninfo_unexecuted_blocks=1 00:40:15.826 00:40:15.826 ' 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:15.826 
22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.826 22:49:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:40:15.826 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:15.827 
22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:15.827 22:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:40:15.827 22:49:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:40:22.397 22:49:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:22.397 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:22.397 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:22.397 
22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:22.397 Found net 
devices under 0000:af:00.0: cvl_0_0 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:22.397 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:22.398 Found net devices under 0000:af:00.1: cvl_0_1 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:22.398 22:49:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:22.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:22.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:40:22.398 00:40:22.398 --- 10.0.0.2 ping statistics --- 00:40:22.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:22.398 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:22.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:22.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:40:22.398 00:40:22.398 --- 10.0.0.1 ping statistics --- 00:40:22.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:22.398 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:22.398 22:49:42 
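The `nvmf_tcp_init` steps just traced isolate the target port in a network namespace and leave the initiator port in the root namespace, then verify reachability with pings in both directions. A minimal sketch of that command sequence, rendered from the interface and address values in the log (the helper only builds the strings; actually running them needs root and the real NICs):

```python
# Sketch of the namespace plumbing performed by nvmf_tcp_init above.
# All names and addresses come straight from the log output.
def netns_setup_cmds(target_if="cvl_0_0", initiator_if="cvl_0_1",
                     ns="cvl_0_0_ns_spdk",
                     initiator_ip="10.0.0.1", target_ip="10.0.0.2"):
    in_ns = f"ip netns exec {ns}"
    return [
        f"ip netns add {ns}",
        f"ip link set {target_if} netns {ns}",      # move target port into the namespace
        f"ip addr add {initiator_ip}/24 dev {initiator_if}",
        f"{in_ns} ip addr add {target_ip}/24 dev {target_if}",
        f"ip link set {initiator_if} up",
        f"{in_ns} ip link set {target_if} up",
        f"{in_ns} ip link set lo up",
    ]

for cmd in netns_setup_cmds():
    print(cmd)
```

The iptables ACCEPT rule for TCP port 4420 and the `NVMF_TARGET_NS_CMD` prefix (so `nvmf_tgt` itself runs inside the namespace) complete the setup, matching the successful 10.0.0.2 and 10.0.0.1 pings above.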
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=608522 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 608522 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 608522 ']' 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:22.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:22.398 [2024-12-14 22:49:42.368979] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:22.398 [2024-12-14 22:49:42.369915] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:40:22.398 [2024-12-14 22:49:42.369950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:22.398 [2024-12-14 22:49:42.449243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:22.398 [2024-12-14 22:49:42.472371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:22.398 [2024-12-14 22:49:42.472408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:22.398 [2024-12-14 22:49:42.472418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:22.398 [2024-12-14 22:49:42.472424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:22.398 [2024-12-14 22:49:42.472429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:22.398 [2024-12-14 22:49:42.473833] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:22.398 [2024-12-14 22:49:42.473944] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:22.398 [2024-12-14 22:49:42.473858] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:22.398 [2024-12-14 22:49:42.473945] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:22.398 [2024-12-14 22:49:42.536755] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:22.398 [2024-12-14 22:49:42.536947] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:22.398 [2024-12-14 22:49:42.537656] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:40:22.398 [2024-12-14 22:49:42.537729] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:22.398 [2024-12-14 22:49:42.537866] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:22.398 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:22.398 [2024-12-14 22:49:42.778738] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:22.399 22:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:22.399 22:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:40:22.399 22:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:40:22.399 22:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:40:22.399 22:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:22.657 22:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:40:22.657 22:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:22.916 22:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:40:22.916 22:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:40:23.175 22:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:23.434 22:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:40:23.434 22:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:23.434 22:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:40:23.434 22:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:23.693 22:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:40:23.693 22:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:40:23.952 22:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:23.952 22:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:23.952 22:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:24.212 22:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:24.212 22:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:40:24.470 22:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:24.727 [2024-12-14 22:49:45.362665] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:24.727 22:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:40:24.727 22:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:40:24.986 22:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:25.244 22:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:40:25.244 22:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:40:25.244 22:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:25.244 22:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:40:25.244 22:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:40:25.244 22:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:40:27.149 22:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:27.149 22:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:27.149 22:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:27.149 22:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:40:27.149 22:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:27.149 22:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
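After `nvme connect`, the `waitforserial` helper traced above polls `lsblk -l -o NAME,SERIAL | grep -c <serial>` until the expected four namespaces appear, sleeping 2 s per attempt for up to 15 retries. The retry pattern can be sketched like this (the device counter is stubbed; the real script shells out to lsblk):

```python
import time

# Sketch of the waitforserial retry loop from autotest_common.sh above:
# poll until the expected namespace count is reached, up to `retries`
# extra attempts with `delay` seconds between them.
def wait_for_serial(count_devices, expected, retries=15, delay=2.0):
    for _ in range(retries + 1):
        if count_devices() == expected:
            return True
        time.sleep(delay)
    return False

# Stub standing in for `lsblk ... | grep -c SPDKISFASTANDAWESOME`:
# first poll sees 0 devices, the next sees all 4, as in the log.
seen = iter([0, 4])
print(wait_for_serial(lambda: next(seen, 4), expected=4, delay=0.0))
```

In the log the very first check after the 2 s sleep already counts `nvme_devices=4`, so the loop returns 0 immediately.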
common/autotest_common.sh@1212 -- # return 0 00:40:27.149 22:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:27.408 [global] 00:40:27.408 thread=1 00:40:27.408 invalidate=1 00:40:27.408 rw=write 00:40:27.408 time_based=1 00:40:27.408 runtime=1 00:40:27.408 ioengine=libaio 00:40:27.408 direct=1 00:40:27.408 bs=4096 00:40:27.408 iodepth=1 00:40:27.408 norandommap=0 00:40:27.408 numjobs=1 00:40:27.408 00:40:27.408 verify_dump=1 00:40:27.408 verify_backlog=512 00:40:27.408 verify_state_save=0 00:40:27.408 do_verify=1 00:40:27.408 verify=crc32c-intel 00:40:27.408 [job0] 00:40:27.408 filename=/dev/nvme0n1 00:40:27.408 [job1] 00:40:27.408 filename=/dev/nvme0n2 00:40:27.408 [job2] 00:40:27.408 filename=/dev/nvme0n3 00:40:27.408 [job3] 00:40:27.408 filename=/dev/nvme0n4 00:40:27.408 Could not set queue depth (nvme0n1) 00:40:27.408 Could not set queue depth (nvme0n2) 00:40:27.408 Could not set queue depth (nvme0n3) 00:40:27.408 Could not set queue depth (nvme0n4) 00:40:27.667 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:27.667 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:27.667 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:27.667 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:27.667 fio-3.35 00:40:27.667 Starting 4 threads 00:40:29.055 00:40:29.056 job0: (groupid=0, jobs=1): err= 0: pid=609610: Sat Dec 14 22:49:49 2024 00:40:29.056 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:40:29.056 slat (nsec): min=7074, max=43865, avg=8255.77, stdev=1511.78 00:40:29.056 clat (usec): min=207, max=404, avg=252.21, stdev=14.98 00:40:29.056 lat (usec): min=215, max=412, 
avg=260.46, stdev=15.03 00:40:29.056 clat percentiles (usec): 00:40:29.056 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:40:29.056 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:40:29.056 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:40:29.056 | 99.00th=[ 293], 99.50th=[ 297], 99.90th=[ 306], 99.95th=[ 310], 00:40:29.056 | 99.99th=[ 404] 00:40:29.056 write: IOPS=2300, BW=9203KiB/s (9424kB/s)(9212KiB/1001msec); 0 zone resets 00:40:29.056 slat (nsec): min=10294, max=38177, avg=11860.73, stdev=1687.11 00:40:29.056 clat (usec): min=150, max=330, avg=185.08, stdev=20.53 00:40:29.056 lat (usec): min=162, max=344, avg=196.94, stdev=20.65 00:40:29.056 clat percentiles (usec): 00:40:29.056 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:40:29.056 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:40:29.056 | 70.00th=[ 190], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 221], 00:40:29.056 | 99.00th=[ 235], 99.50th=[ 281], 99.90th=[ 293], 99.95th=[ 310], 00:40:29.056 | 99.99th=[ 330] 00:40:29.056 bw ( KiB/s): min= 9208, max= 9208, per=24.01%, avg=9208.00, stdev= 0.00, samples=1 00:40:29.056 iops : min= 2302, max= 2302, avg=2302.00, stdev= 0.00, samples=1 00:40:29.056 lat (usec) : 250=75.32%, 500=24.68% 00:40:29.056 cpu : usr=2.70%, sys=7.90%, ctx=4354, majf=0, minf=1 00:40:29.056 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:29.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:29.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:29.056 issued rwts: total=2048,2303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:29.056 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:29.056 job1: (groupid=0, jobs=1): err= 0: pid=609617: Sat Dec 14 22:49:49 2024 00:40:29.056 read: IOPS=2198, BW=8795KiB/s (9006kB/s)(8804KiB/1001msec) 00:40:29.056 slat (nsec): min=6277, max=20671, avg=7048.72, 
stdev=647.76 00:40:29.056 clat (usec): min=188, max=445, avg=235.44, stdev=17.06 00:40:29.056 lat (usec): min=195, max=451, avg=242.48, stdev=17.07 00:40:29.056 clat percentiles (usec): 00:40:29.056 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:40:29.056 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 243], 00:40:29.056 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 258], 00:40:29.056 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 293], 99.95th=[ 297], 00:40:29.056 | 99.99th=[ 445] 00:40:29.056 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:40:29.056 slat (nsec): min=9031, max=39081, avg=10058.51, stdev=1117.43 00:40:29.056 clat (usec): min=134, max=3871, avg=167.99, stdev=74.63 00:40:29.056 lat (usec): min=145, max=3880, avg=178.04, stdev=74.65 00:40:29.056 clat percentiles (usec): 00:40:29.056 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:40:29.056 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:40:29.056 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 190], 00:40:29.056 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 247], 99.95th=[ 293], 00:40:29.056 | 99.99th=[ 3884] 00:40:29.056 bw ( KiB/s): min=11544, max=11544, per=30.10%, avg=11544.00, stdev= 0.00, samples=1 00:40:29.056 iops : min= 2886, max= 2886, avg=2886.00, stdev= 0.00, samples=1 00:40:29.056 lat (usec) : 250=90.53%, 500=9.45% 00:40:29.056 lat (msec) : 4=0.02% 00:40:29.056 cpu : usr=2.40%, sys=4.30%, ctx=4762, majf=0, minf=2 00:40:29.056 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:29.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:29.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:29.056 issued rwts: total=2201,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:29.056 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:29.056 job2: (groupid=0, jobs=1): err= 0: 
pid=609624: Sat Dec 14 22:49:49 2024 00:40:29.056 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:40:29.056 slat (nsec): min=7650, max=23087, avg=8988.61, stdev=1187.69 00:40:29.056 clat (usec): min=212, max=489, avg=250.08, stdev=16.72 00:40:29.056 lat (usec): min=220, max=500, avg=259.07, stdev=16.73 00:40:29.056 clat percentiles (usec): 00:40:29.056 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:40:29.056 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:40:29.056 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 277], 00:40:29.056 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 445], 99.95th=[ 474], 00:40:29.056 | 99.99th=[ 490] 00:40:29.056 write: IOPS=2387, BW=9550KiB/s (9780kB/s)(9560KiB/1001msec); 0 zone resets 00:40:29.056 slat (nsec): min=11037, max=44037, avg=12632.94, stdev=1866.94 00:40:29.056 clat (usec): min=149, max=252, avg=177.90, stdev=12.27 00:40:29.056 lat (usec): min=162, max=273, avg=190.53, stdev=12.59 00:40:29.056 clat percentiles (usec): 00:40:29.056 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:40:29.056 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:40:29.056 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 202], 00:40:29.056 | 99.00th=[ 215], 99.50th=[ 221], 99.90th=[ 233], 99.95th=[ 235], 00:40:29.056 | 99.99th=[ 253] 00:40:29.056 bw ( KiB/s): min= 9472, max= 9472, per=24.70%, avg=9472.00, stdev= 0.00, samples=1 00:40:29.056 iops : min= 2368, max= 2368, avg=2368.00, stdev= 0.00, samples=1 00:40:29.056 lat (usec) : 250=80.71%, 500=19.29% 00:40:29.056 cpu : usr=3.90%, sys=7.40%, ctx=4440, majf=0, minf=1 00:40:29.056 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:29.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:29.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:29.056 issued rwts: total=2048,2390,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:40:29.056 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:29.056 job3: (groupid=0, jobs=1): err= 0: pid=609627: Sat Dec 14 22:49:49 2024 00:40:29.056 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:40:29.056 slat (nsec): min=8136, max=23181, avg=9249.23, stdev=1352.96 00:40:29.056 clat (usec): min=208, max=473, avg=248.07, stdev=16.80 00:40:29.056 lat (usec): min=217, max=482, avg=257.32, stdev=16.84 00:40:29.056 clat percentiles (usec): 00:40:29.056 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 239], 00:40:29.056 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:40:29.056 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 273], 00:40:29.056 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 461], 99.95th=[ 461], 00:40:29.056 | 99.99th=[ 474] 00:40:29.056 write: IOPS=2341, BW=9367KiB/s (9591kB/s)(9376KiB/1001msec); 0 zone resets 00:40:29.056 slat (nsec): min=11147, max=68335, avg=12500.46, stdev=1975.00 00:40:29.056 clat (usec): min=147, max=293, avg=183.32, stdev=18.13 00:40:29.056 lat (usec): min=159, max=319, avg=195.82, stdev=18.35 00:40:29.056 clat percentiles (usec): 00:40:29.056 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:40:29.056 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:40:29.056 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 217], 00:40:29.056 | 99.00th=[ 239], 99.50th=[ 251], 99.90th=[ 277], 99.95th=[ 281], 00:40:29.056 | 99.99th=[ 293] 00:40:29.056 bw ( KiB/s): min= 9080, max= 9080, per=23.68%, avg=9080.00, stdev= 0.00, samples=1 00:40:29.056 iops : min= 2270, max= 2270, avg=2270.00, stdev= 0.00, samples=1 00:40:29.056 lat (usec) : 250=83.15%, 500=16.85% 00:40:29.056 cpu : usr=5.10%, sys=6.40%, ctx=4393, majf=0, minf=2 00:40:29.056 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:29.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:29.056 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:29.056 issued rwts: total=2048,2344,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:29.056 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:29.056 00:40:29.056 Run status group 0 (all jobs): 00:40:29.056 READ: bw=32.6MiB/s (34.1MB/s), 8184KiB/s-8795KiB/s (8380kB/s-9006kB/s), io=32.6MiB (34.2MB), run=1001-1001msec 00:40:29.056 WRITE: bw=37.5MiB/s (39.3MB/s), 9203KiB/s-9.99MiB/s (9424kB/s-10.5MB/s), io=37.5MiB (39.3MB), run=1001-1001msec 00:40:29.056 00:40:29.056 Disk stats (read/write): 00:40:29.056 nvme0n1: ios=1687/2048, merge=0/0, ticks=1258/362, in_queue=1620, util=85.67% 00:40:29.056 nvme0n2: ios=2027/2048, merge=0/0, ticks=517/343, in_queue=860, util=90.84% 00:40:29.056 nvme0n3: ios=1759/2048, merge=0/0, ticks=1323/343, in_queue=1666, util=93.33% 00:40:29.056 nvme0n4: ios=1757/2048, merge=0/0, ticks=469/356, in_queue=825, util=95.59% 00:40:29.056 22:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:40:29.056 [global] 00:40:29.056 thread=1 00:40:29.056 invalidate=1 00:40:29.056 rw=randwrite 00:40:29.056 time_based=1 00:40:29.056 runtime=1 00:40:29.056 ioengine=libaio 00:40:29.056 direct=1 00:40:29.056 bs=4096 00:40:29.056 iodepth=1 00:40:29.056 norandommap=0 00:40:29.056 numjobs=1 00:40:29.056 00:40:29.056 verify_dump=1 00:40:29.056 verify_backlog=512 00:40:29.056 verify_state_save=0 00:40:29.056 do_verify=1 00:40:29.056 verify=crc32c-intel 00:40:29.056 [job0] 00:40:29.056 filename=/dev/nvme0n1 00:40:29.056 [job1] 00:40:29.056 filename=/dev/nvme0n2 00:40:29.056 [job2] 00:40:29.056 filename=/dev/nvme0n3 00:40:29.056 [job3] 00:40:29.056 filename=/dev/nvme0n4 00:40:29.056 Could not set queue depth (nvme0n1) 00:40:29.056 Could not set queue depth (nvme0n2) 00:40:29.056 Could not set queue depth (nvme0n3) 00:40:29.056 Could not set queue 
depth (nvme0n4) 00:40:29.315 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:29.315 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:29.315 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:29.315 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:29.315 fio-3.35 00:40:29.315 Starting 4 threads 00:40:30.693 00:40:30.693 job0: (groupid=0, jobs=1): err= 0: pid=610009: Sat Dec 14 22:49:51 2024 00:40:30.693 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:40:30.693 slat (nsec): min=7824, max=42066, avg=9157.87, stdev=1668.09 00:40:30.693 clat (usec): min=207, max=481, avg=241.32, stdev=14.81 00:40:30.693 lat (usec): min=217, max=490, avg=250.47, stdev=14.83 00:40:30.693 clat percentiles (usec): 00:40:30.693 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 231], 00:40:30.693 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 245], 00:40:30.693 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 262], 00:40:30.693 | 99.00th=[ 281], 99.50th=[ 285], 99.90th=[ 383], 99.95th=[ 416], 00:40:30.693 | 99.99th=[ 482] 00:40:30.693 write: IOPS=2422, BW=9690KiB/s (9923kB/s)(9700KiB/1001msec); 0 zone resets 00:40:30.693 slat (nsec): min=10327, max=48957, avg=11967.32, stdev=1699.52 00:40:30.693 clat (usec): min=128, max=379, avg=183.15, stdev=38.93 00:40:30.693 lat (usec): min=139, max=392, avg=195.12, stdev=39.19 00:40:30.693 clat percentiles (usec): 00:40:30.693 | 1.00th=[ 135], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 155], 00:40:30.693 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 174], 00:40:30.693 | 70.00th=[ 196], 80.00th=[ 215], 90.00th=[ 241], 95.00th=[ 262], 00:40:30.693 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 347], 99.95th=[ 359], 00:40:30.693 | 99.99th=[ 379] 
00:40:30.693 bw ( KiB/s): min= 8928, max= 8928, per=38.17%, avg=8928.00, stdev= 0.00, samples=1 00:40:30.693 iops : min= 2232, max= 2232, avg=2232.00, stdev= 0.00, samples=1 00:40:30.693 lat (usec) : 250=87.26%, 500=12.74% 00:40:30.693 cpu : usr=3.50%, sys=8.10%, ctx=4473, majf=0, minf=1 00:40:30.693 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:30.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.693 issued rwts: total=2048,2425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.693 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:30.693 job1: (groupid=0, jobs=1): err= 0: pid=610024: Sat Dec 14 22:49:51 2024 00:40:30.693 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:40:30.693 slat (nsec): min=6539, max=21489, avg=7369.20, stdev=854.65 00:40:30.693 clat (usec): min=215, max=417, avg=240.77, stdev=10.13 00:40:30.693 lat (usec): min=222, max=424, avg=248.14, stdev=10.18 00:40:30.693 clat percentiles (usec): 00:40:30.693 | 1.00th=[ 223], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 233], 00:40:30.693 | 30.00th=[ 235], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 243], 00:40:30.693 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 258], 00:40:30.693 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 277], 99.95th=[ 281], 00:40:30.693 | 99.99th=[ 416] 00:40:30.693 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(9.99MiB/1001msec); 0 zone resets 00:40:30.693 slat (nsec): min=9232, max=38580, avg=10298.45, stdev=1262.13 00:40:30.693 clat (usec): min=142, max=324, avg=178.02, stdev=30.27 00:40:30.693 lat (usec): min=152, max=362, avg=188.32, stdev=30.35 00:40:30.693 clat percentiles (usec): 00:40:30.693 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 161], 00:40:30.693 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:40:30.693 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 208], 
95.00th=[ 269], 00:40:30.693 | 99.00th=[ 285], 99.50th=[ 289], 99.90th=[ 302], 99.95th=[ 306], 00:40:30.693 | 99.99th=[ 326] 00:40:30.693 bw ( KiB/s): min=10120, max=10120, per=43.26%, avg=10120.00, stdev= 0.00, samples=1 00:40:30.693 iops : min= 2530, max= 2530, avg=2530.00, stdev= 0.00, samples=1 00:40:30.693 lat (usec) : 250=89.25%, 500=10.75% 00:40:30.693 cpu : usr=1.80%, sys=4.80%, ctx=4606, majf=0, minf=1 00:40:30.693 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:30.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.693 issued rwts: total=2048,2557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.693 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:30.693 job2: (groupid=0, jobs=1): err= 0: pid=610040: Sat Dec 14 22:49:51 2024 00:40:30.693 read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec) 00:40:30.693 slat (nsec): min=10211, max=25262, avg=22126.36, stdev=2767.54 00:40:30.693 clat (usec): min=40681, max=41932, avg=41003.14, stdev=223.77 00:40:30.693 lat (usec): min=40691, max=41954, avg=41025.27, stdev=224.68 00:40:30.693 clat percentiles (usec): 00:40:30.693 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:30.693 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:30.693 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:30.693 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:40:30.693 | 99.99th=[41681] 00:40:30.693 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:40:30.693 slat (nsec): min=9936, max=62738, avg=11433.80, stdev=2804.64 00:40:30.693 clat (usec): min=150, max=340, avg=222.34, stdev=30.37 00:40:30.693 lat (usec): min=161, max=391, avg=233.77, stdev=30.92 00:40:30.693 clat percentiles (usec): 00:40:30.693 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 178], 
20.00th=[ 202], 00:40:30.693 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 227], 00:40:30.693 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 262], 95.00th=[ 273], 00:40:30.693 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 343], 99.95th=[ 343], 00:40:30.693 | 99.99th=[ 343] 00:40:30.693 bw ( KiB/s): min= 4096, max= 4096, per=17.51%, avg=4096.00, stdev= 0.00, samples=1 00:40:30.693 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:30.693 lat (usec) : 250=81.09%, 500=14.79% 00:40:30.693 lat (msec) : 50=4.12% 00:40:30.693 cpu : usr=0.10%, sys=1.17%, ctx=534, majf=0, minf=1 00:40:30.693 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:30.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.693 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.693 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:30.693 job3: (groupid=0, jobs=1): err= 0: pid=610045: Sat Dec 14 22:49:51 2024 00:40:30.693 read: IOPS=22, BW=89.6KiB/s (91.7kB/s)(92.0KiB/1027msec) 00:40:30.693 slat (nsec): min=4824, max=26941, avg=17986.52, stdev=7451.33 00:40:30.693 clat (usec): min=332, max=41998, avg=39237.62, stdev=8484.56 00:40:30.693 lat (usec): min=359, max=42007, avg=39255.61, stdev=8482.57 00:40:30.693 clat percentiles (usec): 00:40:30.693 | 1.00th=[ 334], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:40:30.693 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:30.693 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:30.693 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:30.693 | 99.99th=[42206] 00:40:30.693 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:40:30.693 slat (nsec): min=4129, max=16573, avg=5061.87, stdev=1078.34 00:40:30.693 clat (usec): min=140, max=405, 
avg=235.58, stdev=38.32 00:40:30.693 lat (usec): min=145, max=410, avg=240.64, stdev=38.33 00:40:30.693 clat percentiles (usec): 00:40:30.694 | 1.00th=[ 153], 5.00th=[ 172], 10.00th=[ 184], 20.00th=[ 215], 00:40:30.694 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 243], 00:40:30.694 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 277], 95.00th=[ 314], 00:40:30.694 | 99.00th=[ 351], 99.50th=[ 379], 99.90th=[ 404], 99.95th=[ 404], 00:40:30.694 | 99.99th=[ 404] 00:40:30.694 bw ( KiB/s): min= 4096, max= 4096, per=17.51%, avg=4096.00, stdev= 0.00, samples=1 00:40:30.694 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:30.694 lat (usec) : 250=71.40%, 500=24.49% 00:40:30.694 lat (msec) : 50=4.11% 00:40:30.694 cpu : usr=0.19%, sys=0.19%, ctx=537, majf=0, minf=1 00:40:30.694 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:30.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.694 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.694 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:30.694 00:40:30.694 Run status group 0 (all jobs): 00:40:30.694 READ: bw=15.8MiB/s (16.5MB/s), 85.9KiB/s-8184KiB/s (88.0kB/s-8380kB/s), io=16.2MiB (17.0MB), run=1001-1027msec 00:40:30.694 WRITE: bw=22.8MiB/s (24.0MB/s), 1994KiB/s-9.98MiB/s (2042kB/s-10.5MB/s), io=23.5MiB (24.6MB), run=1001-1027msec 00:40:30.694 00:40:30.694 Disk stats (read/write): 00:40:30.694 nvme0n1: ios=1766/2048, merge=0/0, ticks=501/361, in_queue=862, util=94.49% 00:40:30.694 nvme0n2: ios=1854/2048, merge=0/0, ticks=627/361, in_queue=988, util=98.27% 00:40:30.694 nvme0n3: ios=17/512, merge=0/0, ticks=698/112, in_queue=810, util=88.94% 00:40:30.694 nvme0n4: ios=64/512, merge=0/0, ticks=1055/120, in_queue=1175, util=98.11% 00:40:30.694 22:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:40:30.694 [global] 00:40:30.694 thread=1 00:40:30.694 invalidate=1 00:40:30.694 rw=write 00:40:30.694 time_based=1 00:40:30.694 runtime=1 00:40:30.694 ioengine=libaio 00:40:30.694 direct=1 00:40:30.694 bs=4096 00:40:30.694 iodepth=128 00:40:30.694 norandommap=0 00:40:30.694 numjobs=1 00:40:30.694 00:40:30.694 verify_dump=1 00:40:30.694 verify_backlog=512 00:40:30.694 verify_state_save=0 00:40:30.694 do_verify=1 00:40:30.694 verify=crc32c-intel 00:40:30.694 [job0] 00:40:30.694 filename=/dev/nvme0n1 00:40:30.694 [job1] 00:40:30.694 filename=/dev/nvme0n2 00:40:30.694 [job2] 00:40:30.694 filename=/dev/nvme0n3 00:40:30.694 [job3] 00:40:30.694 filename=/dev/nvme0n4 00:40:30.694 Could not set queue depth (nvme0n1) 00:40:30.694 Could not set queue depth (nvme0n2) 00:40:30.694 Could not set queue depth (nvme0n3) 00:40:30.694 Could not set queue depth (nvme0n4) 00:40:30.694 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:30.694 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:30.694 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:30.694 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:30.694 fio-3.35 00:40:30.694 Starting 4 threads 00:40:32.071 00:40:32.071 job0: (groupid=0, jobs=1): err= 0: pid=610427: Sat Dec 14 22:49:52 2024 00:40:32.071 read: IOPS=7636, BW=29.8MiB/s (31.3MB/s)(29.9MiB/1003msec) 00:40:32.071 slat (nsec): min=1264, max=14011k, avg=69580.75, stdev=573768.26 00:40:32.071 clat (usec): min=1356, max=28715, avg=8708.45, stdev=2475.00 00:40:32.071 lat (usec): min=3574, max=28722, avg=8778.03, stdev=2524.96 00:40:32.071 clat percentiles (usec): 00:40:32.071 | 1.00th=[ 4948], 
5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 7177], 00:40:32.071 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8225], 00:40:32.071 | 70.00th=[ 8717], 80.00th=[10552], 90.00th=[12387], 95.00th=[13698], 00:40:32.071 | 99.00th=[16057], 99.50th=[19530], 99.90th=[19792], 99.95th=[19792], 00:40:32.071 | 99.99th=[28705] 00:40:32.071 write: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec); 0 zone resets 00:40:32.071 slat (usec): min=2, max=13509, avg=54.90, stdev=371.81 00:40:32.071 clat (usec): min=1464, max=28707, avg=7869.87, stdev=2106.65 00:40:32.071 lat (usec): min=1496, max=28714, avg=7924.77, stdev=2134.54 00:40:32.071 clat percentiles (usec): 00:40:32.071 | 1.00th=[ 3425], 5.00th=[ 4817], 10.00th=[ 5473], 20.00th=[ 6456], 00:40:32.071 | 30.00th=[ 7177], 40.00th=[ 7701], 50.00th=[ 8094], 60.00th=[ 8225], 00:40:32.071 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[10028], 95.00th=[11338], 00:40:32.071 | 99.00th=[15401], 99.50th=[16188], 99.90th=[16188], 99.95th=[23462], 00:40:32.071 | 99.99th=[28705] 00:40:32.071 bw ( KiB/s): min=30128, max=31312, per=50.56%, avg=30720.00, stdev=837.21, samples=2 00:40:32.071 iops : min= 7532, max= 7828, avg=7680.00, stdev=209.30, samples=2 00:40:32.071 lat (msec) : 2=0.10%, 4=0.95%, 10=82.87%, 20=16.05%, 50=0.04% 00:40:32.071 cpu : usr=6.89%, sys=8.28%, ctx=691, majf=0, minf=1 00:40:32.071 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:40:32.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:32.071 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:32.072 issued rwts: total=7659,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:32.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:32.072 job1: (groupid=0, jobs=1): err= 0: pid=610440: Sat Dec 14 22:49:52 2024 00:40:32.072 read: IOPS=2622, BW=10.2MiB/s (10.7MB/s)(10.7MiB/1045msec) 00:40:32.072 slat (nsec): min=1100, max=14699k, avg=128520.79, stdev=789937.09 
00:40:32.072 clat (usec): min=3247, max=73444, avg=17992.20, stdev=10960.92 00:40:32.072 lat (usec): min=3255, max=73448, avg=18120.72, stdev=11001.92 00:40:32.072 clat percentiles (usec): 00:40:32.072 | 1.00th=[ 8225], 5.00th=[11731], 10.00th=[11994], 20.00th=[12125], 00:40:32.072 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13698], 60.00th=[14615], 00:40:32.072 | 70.00th=[16450], 80.00th=[19530], 90.00th=[32113], 95.00th=[46924], 00:40:32.072 | 99.00th=[54789], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:40:32.072 | 99.99th=[73925] 00:40:32.072 write: IOPS=2939, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1045msec); 0 zone resets 00:40:32.072 slat (usec): min=2, max=14956, avg=204.54, stdev=1024.16 00:40:32.072 clat (usec): min=1156, max=87767, avg=27080.58, stdev=16308.69 00:40:32.072 lat (usec): min=1170, max=87774, avg=27285.13, stdev=16410.77 00:40:32.072 clat percentiles (usec): 00:40:32.072 | 1.00th=[ 8717], 5.00th=[ 9896], 10.00th=[11863], 20.00th=[12256], 00:40:32.072 | 30.00th=[14877], 40.00th=[21103], 50.00th=[24511], 60.00th=[27132], 00:40:32.072 | 70.00th=[31589], 80.00th=[35390], 90.00th=[52167], 95.00th=[58459], 00:40:32.072 | 99.00th=[84411], 99.50th=[86508], 99.90th=[87557], 99.95th=[87557], 00:40:32.072 | 99.99th=[87557] 00:40:32.072 bw ( KiB/s): min=12288, max=12288, per=20.23%, avg=12288.00, stdev= 0.00, samples=2 00:40:32.072 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:40:32.072 lat (msec) : 2=0.03%, 4=0.17%, 10=3.99%, 20=54.48%, 50=34.13% 00:40:32.072 lat (msec) : 100=7.19% 00:40:32.072 cpu : usr=2.11%, sys=3.07%, ctx=300, majf=0, minf=2 00:40:32.072 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:40:32.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:32.072 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:32.072 issued rwts: total=2741,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:32.072 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:40:32.072 job2: (groupid=0, jobs=1): err= 0: pid=610455: Sat Dec 14 22:49:52 2024 00:40:32.072 read: IOPS=1996, BW=7984KiB/s (8176kB/s)(8032KiB/1006msec) 00:40:32.072 slat (nsec): min=1138, max=25901k, avg=227364.70, stdev=1483654.72 00:40:32.072 clat (usec): min=1862, max=80076, avg=28435.72, stdev=16846.08 00:40:32.072 lat (usec): min=7595, max=80083, avg=28663.08, stdev=16916.43 00:40:32.072 clat percentiles (usec): 00:40:32.072 | 1.00th=[ 7767], 5.00th=[10683], 10.00th=[12518], 20.00th=[17433], 00:40:32.072 | 30.00th=[17957], 40.00th=[17957], 50.00th=[19530], 60.00th=[26084], 00:40:32.072 | 70.00th=[32113], 80.00th=[42206], 90.00th=[54264], 95.00th=[58983], 00:40:32.072 | 99.00th=[80217], 99.50th=[80217], 99.90th=[80217], 99.95th=[80217], 00:40:32.072 | 99.99th=[80217] 00:40:32.072 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:40:32.072 slat (usec): min=2, max=9864, avg=261.48, stdev=948.76 00:40:32.072 clat (usec): min=5936, max=87836, avg=34132.58, stdev=18950.46 00:40:32.072 lat (usec): min=5944, max=87843, avg=34394.06, stdev=19042.40 00:40:32.072 clat percentiles (usec): 00:40:32.072 | 1.00th=[ 8160], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10552], 00:40:32.072 | 30.00th=[25035], 40.00th=[26608], 50.00th=[33817], 60.00th=[40633], 00:40:32.072 | 70.00th=[45876], 80.00th=[49546], 90.00th=[57410], 95.00th=[63177], 00:40:32.072 | 99.00th=[86508], 99.50th=[87557], 99.90th=[87557], 99.95th=[87557], 00:40:32.072 | 99.99th=[87557] 00:40:32.072 bw ( KiB/s): min= 8192, max= 8192, per=13.48%, avg=8192.00, stdev= 0.00, samples=2 00:40:32.072 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:40:32.072 lat (msec) : 2=0.02%, 10=8.04%, 20=29.41%, 50=46.08%, 100=16.44% 00:40:32.072 cpu : usr=1.59%, sys=1.79%, ctx=311, majf=0, minf=1 00:40:32.072 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:40:32.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:40:32.072 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:32.072 issued rwts: total=2008,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:32.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:32.072 job3: (groupid=0, jobs=1): err= 0: pid=610456: Sat Dec 14 22:49:52 2024 00:40:32.072 read: IOPS=2580, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1010msec) 00:40:32.072 slat (nsec): min=1505, max=12347k, avg=135118.93, stdev=817077.90 00:40:32.072 clat (usec): min=4554, max=56217, avg=14975.29, stdev=7572.84 00:40:32.072 lat (usec): min=4567, max=56229, avg=15110.41, stdev=7651.09 00:40:32.072 clat percentiles (usec): 00:40:32.072 | 1.00th=[ 6849], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[10945], 00:40:32.072 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11731], 60.00th=[13698], 00:40:32.072 | 70.00th=[15270], 80.00th=[16909], 90.00th=[21627], 95.00th=[32113], 00:40:32.072 | 99.00th=[47973], 99.50th=[51119], 99.90th=[56361], 99.95th=[56361], 00:40:32.072 | 99.99th=[56361] 00:40:32.072 write: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec); 0 zone resets 00:40:32.072 slat (usec): min=2, max=41626, avg=193.89, stdev=1133.73 00:40:32.072 clat (usec): min=3214, max=72775, avg=26210.56, stdev=14781.52 00:40:32.072 lat (usec): min=3225, max=72834, avg=26404.46, stdev=14904.40 00:40:32.072 clat percentiles (usec): 00:40:32.072 | 1.00th=[ 5407], 5.00th=[ 6849], 10.00th=[ 7504], 20.00th=[10028], 00:40:32.072 | 30.00th=[12649], 40.00th=[21103], 50.00th=[26084], 60.00th=[31327], 00:40:32.072 | 70.00th=[35390], 80.00th=[41157], 90.00th=[48497], 95.00th=[50070], 00:40:32.072 | 99.00th=[52691], 99.50th=[53740], 99.90th=[56361], 99.95th=[72877], 00:40:32.072 | 99.99th=[72877] 00:40:32.072 bw ( KiB/s): min=11200, max=12720, per=19.69%, avg=11960.00, stdev=1074.80, samples=2 00:40:32.072 iops : min= 2800, max= 3180, avg=2990.00, stdev=268.70, samples=2 00:40:32.072 lat (msec) : 4=0.11%, 10=13.53%, 20=47.68%, 50=35.91%, 
100=2.78% 00:40:32.072 cpu : usr=1.98%, sys=4.56%, ctx=315, majf=0, minf=1 00:40:32.072 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:40:32.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:32.072 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:32.072 issued rwts: total=2606,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:32.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:32.072 00:40:32.072 Run status group 0 (all jobs): 00:40:32.072 READ: bw=56.1MiB/s (58.8MB/s), 7984KiB/s-29.8MiB/s (8176kB/s-31.3MB/s), io=58.6MiB (61.5MB), run=1003-1045msec 00:40:32.072 WRITE: bw=59.3MiB/s (62.2MB/s), 8143KiB/s-29.9MiB/s (8339kB/s-31.4MB/s), io=62.0MiB (65.0MB), run=1003-1045msec 00:40:32.072 00:40:32.072 Disk stats (read/write): 00:40:32.072 nvme0n1: ios=6279/6656, merge=0/0, ticks=52614/51091, in_queue=103705, util=97.70% 00:40:32.072 nvme0n2: ios=2600/2615, merge=0/0, ticks=22071/35586, in_queue=57657, util=98.27% 00:40:32.072 nvme0n3: ios=1560/1783, merge=0/0, ticks=12840/14906, in_queue=27746, util=97.60% 00:40:32.072 nvme0n4: ios=2077/2560, merge=0/0, ticks=29476/63968, in_queue=93444, util=100.00% 00:40:32.072 22:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:40:32.072 [global] 00:40:32.072 thread=1 00:40:32.072 invalidate=1 00:40:32.072 rw=randwrite 00:40:32.072 time_based=1 00:40:32.072 runtime=1 00:40:32.072 ioengine=libaio 00:40:32.072 direct=1 00:40:32.072 bs=4096 00:40:32.072 iodepth=128 00:40:32.072 norandommap=0 00:40:32.072 numjobs=1 00:40:32.072 00:40:32.072 verify_dump=1 00:40:32.072 verify_backlog=512 00:40:32.072 verify_state_save=0 00:40:32.072 do_verify=1 00:40:32.072 verify=crc32c-intel 00:40:32.072 [job0] 00:40:32.072 filename=/dev/nvme0n1 00:40:32.072 [job1] 00:40:32.072 
filename=/dev/nvme0n2 00:40:32.072 [job2] 00:40:32.072 filename=/dev/nvme0n3 00:40:32.072 [job3] 00:40:32.072 filename=/dev/nvme0n4 00:40:32.072 Could not set queue depth (nvme0n1) 00:40:32.072 Could not set queue depth (nvme0n2) 00:40:32.072 Could not set queue depth (nvme0n3) 00:40:32.072 Could not set queue depth (nvme0n4) 00:40:32.331 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:32.331 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:32.331 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:32.331 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:32.331 fio-3.35 00:40:32.331 Starting 4 threads 00:40:33.709 00:40:33.709 job0: (groupid=0, jobs=1): err= 0: pid=610873: Sat Dec 14 22:49:54 2024 00:40:33.709 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:40:33.709 slat (nsec): min=1098, max=12214k, avg=92037.37, stdev=621030.32 00:40:33.709 clat (usec): min=6593, max=30094, avg=12552.80, stdev=3030.25 00:40:33.709 lat (usec): min=6644, max=30117, avg=12644.84, stdev=3052.01 00:40:33.709 clat percentiles (usec): 00:40:33.709 | 1.00th=[ 7767], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10159], 00:40:33.709 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11994], 60.00th=[12780], 00:40:33.709 | 70.00th=[13304], 80.00th=[14091], 90.00th=[15926], 95.00th=[18220], 00:40:33.709 | 99.00th=[23725], 99.50th=[23725], 99.90th=[24249], 99.95th=[25822], 00:40:33.709 | 99.99th=[30016] 00:40:33.709 write: IOPS=5383, BW=21.0MiB/s (22.1MB/s)(21.1MiB/1004msec); 0 zone resets 00:40:33.709 slat (nsec): min=1890, max=9908.9k, avg=91367.14, stdev=611629.16 00:40:33.709 clat (usec): min=385, max=26662, avg=11664.75, stdev=2905.26 00:40:33.709 lat (usec): min=1694, max=26665, avg=11756.11, stdev=2963.73 00:40:33.709 
clat percentiles (usec): 00:40:33.709 | 1.00th=[ 4178], 5.00th=[ 7832], 10.00th=[ 9503], 20.00th=[10028], 00:40:33.709 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:40:33.709 | 70.00th=[11994], 80.00th=[12649], 90.00th=[15401], 95.00th=[16909], 00:40:33.709 | 99.00th=[22676], 99.50th=[25297], 99.90th=[26608], 99.95th=[26608], 00:40:33.709 | 99.99th=[26608] 00:40:33.709 bw ( KiB/s): min=20480, max=21736, per=27.11%, avg=21108.00, stdev=888.13, samples=2 00:40:33.709 iops : min= 5120, max= 5434, avg=5277.00, stdev=222.03, samples=2 00:40:33.709 lat (usec) : 500=0.01% 00:40:33.709 lat (msec) : 2=0.26%, 4=0.23%, 10=18.93%, 20=78.35%, 50=2.23% 00:40:33.709 cpu : usr=3.09%, sys=6.98%, ctx=344, majf=0, minf=1 00:40:33.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:40:33.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:33.709 issued rwts: total=5120,5405,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:33.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:33.709 job1: (groupid=0, jobs=1): err= 0: pid=610886: Sat Dec 14 22:49:54 2024 00:40:33.709 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:40:33.709 slat (nsec): min=1571, max=43068k, avg=105257.24, stdev=821967.87 00:40:33.709 clat (usec): min=8078, max=53130, avg=13555.65, stdev=6863.41 00:40:33.709 lat (usec): min=8087, max=53135, avg=13660.91, stdev=6882.90 00:40:33.709 clat percentiles (usec): 00:40:33.709 | 1.00th=[ 8848], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[11076], 00:40:33.709 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12649], 60.00th=[12780], 00:40:33.709 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14091], 95.00th=[19268], 00:40:33.709 | 99.00th=[53216], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:40:33.709 | 99.99th=[53216] 00:40:33.709 write: IOPS=4862, BW=19.0MiB/s 
(19.9MB/s)(19.1MiB/1003msec); 0 zone resets 00:40:33.710 slat (usec): min=2, max=7547, avg=100.05, stdev=510.80 00:40:33.710 clat (usec): min=429, max=43501, avg=13202.10, stdev=5085.21 00:40:33.710 lat (usec): min=3245, max=43504, avg=13302.15, stdev=5100.07 00:40:33.710 clat percentiles (usec): 00:40:33.710 | 1.00th=[ 6521], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10683], 00:40:33.710 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:40:33.710 | 70.00th=[12780], 80.00th=[13173], 90.00th=[16909], 95.00th=[21365], 00:40:33.710 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:40:33.710 | 99.99th=[43254] 00:40:33.710 bw ( KiB/s): min=18016, max=19976, per=24.40%, avg=18996.00, stdev=1385.93, samples=2 00:40:33.710 iops : min= 4504, max= 4994, avg=4749.00, stdev=346.48, samples=2 00:40:33.710 lat (usec) : 500=0.01% 00:40:33.710 lat (msec) : 4=0.34%, 10=10.03%, 20=84.30%, 50=4.01%, 100=1.32% 00:40:33.710 cpu : usr=3.59%, sys=5.49%, ctx=462, majf=0, minf=1 00:40:33.710 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:40:33.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:33.710 issued rwts: total=4608,4877,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:33.710 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:33.710 job2: (groupid=0, jobs=1): err= 0: pid=610901: Sat Dec 14 22:49:54 2024 00:40:33.710 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:40:33.710 slat (nsec): min=1222, max=12360k, avg=104678.67, stdev=715121.70 00:40:33.710 clat (usec): min=5600, max=31765, avg=14333.97, stdev=4227.15 00:40:33.710 lat (usec): min=5604, max=31779, avg=14438.65, stdev=4257.56 00:40:33.710 clat percentiles (usec): 00:40:33.710 | 1.00th=[ 6587], 5.00th=[ 9372], 10.00th=[10683], 20.00th=[11863], 00:40:33.710 | 30.00th=[12256], 40.00th=[12518], 50.00th=[13173], 
60.00th=[13698], 00:40:33.710 | 70.00th=[14746], 80.00th=[16909], 90.00th=[19006], 95.00th=[22414], 00:40:33.710 | 99.00th=[30278], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:40:33.710 | 99.99th=[31851] 00:40:33.710 write: IOPS=4616, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1003msec); 0 zone resets 00:40:33.710 slat (usec): min=2, max=11078, avg=100.46, stdev=646.09 00:40:33.710 clat (usec): min=1937, max=41979, avg=13211.67, stdev=3233.12 00:40:33.710 lat (usec): min=5327, max=42397, avg=13312.14, stdev=3287.62 00:40:33.710 clat percentiles (usec): 00:40:33.710 | 1.00th=[ 5473], 5.00th=[ 8356], 10.00th=[10421], 20.00th=[11600], 00:40:33.710 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:40:33.710 | 70.00th=[13566], 80.00th=[13829], 90.00th=[16712], 95.00th=[19268], 00:40:33.710 | 99.00th=[20841], 99.50th=[29754], 99.90th=[42206], 99.95th=[42206], 00:40:33.710 | 99.99th=[42206] 00:40:33.710 bw ( KiB/s): min=18032, max=18832, per=23.68%, avg=18432.00, stdev=565.69, samples=2 00:40:33.710 iops : min= 4508, max= 4708, avg=4608.00, stdev=141.42, samples=2 00:40:33.710 lat (msec) : 2=0.01%, 10=7.75%, 20=87.39%, 50=4.85% 00:40:33.710 cpu : usr=2.50%, sys=6.89%, ctx=355, majf=0, minf=1 00:40:33.710 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:40:33.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:33.710 issued rwts: total=4608,4630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:33.710 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:33.710 job3: (groupid=0, jobs=1): err= 0: pid=610906: Sat Dec 14 22:49:54 2024 00:40:33.710 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:40:33.710 slat (nsec): min=1147, max=12077k, avg=113712.20, stdev=677329.45 00:40:33.710 clat (usec): min=3532, max=40075, avg=14509.09, stdev=4401.92 00:40:33.710 lat (usec): min=3537, max=40088, 
avg=14622.80, stdev=4424.06 00:40:33.710 clat percentiles (usec): 00:40:33.710 | 1.00th=[ 7373], 5.00th=[ 9634], 10.00th=[11076], 20.00th=[12125], 00:40:33.710 | 30.00th=[12780], 40.00th=[13566], 50.00th=[13960], 60.00th=[14222], 00:40:33.710 | 70.00th=[14484], 80.00th=[15139], 90.00th=[18482], 95.00th=[23987], 00:40:33.710 | 99.00th=[31851], 99.50th=[34866], 99.90th=[35390], 99.95th=[36439], 00:40:33.710 | 99.99th=[40109] 00:40:33.710 write: IOPS=4614, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1003msec); 0 zone resets 00:40:33.710 slat (nsec): min=1856, max=11203k, avg=98181.09, stdev=529868.64 00:40:33.710 clat (usec): min=1450, max=24108, avg=12993.21, stdev=1910.87 00:40:33.710 lat (usec): min=3296, max=25892, avg=13091.39, stdev=1928.24 00:40:33.710 clat percentiles (usec): 00:40:33.710 | 1.00th=[ 6783], 5.00th=[10290], 10.00th=[10945], 20.00th=[11338], 00:40:33.710 | 30.00th=[12256], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:40:33.710 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14484], 95.00th=[15008], 00:40:33.710 | 99.00th=[19006], 99.50th=[19006], 99.90th=[20317], 99.95th=[22938], 00:40:33.710 | 99.99th=[23987] 00:40:33.710 bw ( KiB/s): min=16384, max=20480, per=23.68%, avg=18432.00, stdev=2896.31, samples=2 00:40:33.710 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:40:33.710 lat (msec) : 2=0.01%, 4=0.35%, 10=4.07%, 20=91.32%, 50=4.26% 00:40:33.710 cpu : usr=1.70%, sys=4.59%, ctx=504, majf=0, minf=1 00:40:33.710 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:40:33.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:33.710 issued rwts: total=4608,4628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:33.710 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:33.710 00:40:33.710 Run status group 0 (all jobs): 00:40:33.710 READ: bw=73.7MiB/s (77.3MB/s), 17.9MiB/s-19.9MiB/s 
(18.8MB/s-20.9MB/s), io=74.0MiB (77.6MB), run=1003-1004msec 00:40:33.710 WRITE: bw=76.0MiB/s (79.7MB/s), 18.0MiB/s-21.0MiB/s (18.9MB/s-22.1MB/s), io=76.3MiB (80.0MB), run=1003-1004msec 00:40:33.710 00:40:33.710 Disk stats (read/write): 00:40:33.710 nvme0n1: ios=4311/4608, merge=0/0, ticks=26932/26969, in_queue=53901, util=86.77% 00:40:33.710 nvme0n2: ios=3842/4096, merge=0/0, ticks=16620/17886, in_queue=34506, util=99.29% 00:40:33.710 nvme0n3: ios=3701/4096, merge=0/0, ticks=27331/28695, in_queue=56026, util=89.05% 00:40:33.710 nvme0n4: ios=3815/4096, merge=0/0, ticks=19480/19014, in_queue=38494, util=98.11% 00:40:33.710 22:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:40:33.710 22:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=610985 00:40:33.710 22:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:40:33.710 22:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:40:33.710 [global] 00:40:33.710 thread=1 00:40:33.710 invalidate=1 00:40:33.710 rw=read 00:40:33.710 time_based=1 00:40:33.710 runtime=10 00:40:33.710 ioengine=libaio 00:40:33.710 direct=1 00:40:33.710 bs=4096 00:40:33.710 iodepth=1 00:40:33.710 norandommap=1 00:40:33.710 numjobs=1 00:40:33.710 00:40:33.710 [job0] 00:40:33.710 filename=/dev/nvme0n1 00:40:33.710 [job1] 00:40:33.710 filename=/dev/nvme0n2 00:40:33.710 [job2] 00:40:33.710 filename=/dev/nvme0n3 00:40:33.710 [job3] 00:40:33.710 filename=/dev/nvme0n4 00:40:33.710 Could not set queue depth (nvme0n1) 00:40:33.710 Could not set queue depth (nvme0n2) 00:40:33.710 Could not set queue depth (nvme0n3) 00:40:33.710 Could not set queue depth (nvme0n4) 00:40:33.969 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:40:33.969 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:33.969 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:33.969 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:33.969 fio-3.35 00:40:33.969 Starting 4 threads 00:40:36.501 22:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:40:36.760 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=34131968, buflen=4096 00:40:36.760 fio: pid=611288, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:36.760 22:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:40:37.019 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=38096896, buflen=4096 00:40:37.019 fio: pid=611287, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:37.019 22:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:37.019 22:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:40:37.278 22:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:37.278 22:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:40:37.278 fio: io_u error on file 
/dev/nvme0n1: Input/output error: read offset=37900288, buflen=4096 00:40:37.278 fio: pid=611283, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:40:37.537 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=51998720, buflen=4096 00:40:37.537 fio: pid=611284, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:37.537 22:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:37.537 22:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:40:37.537 00:40:37.537 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=611283: Sat Dec 14 22:49:58 2024 00:40:37.537 read: IOPS=2975, BW=11.6MiB/s (12.2MB/s)(36.1MiB/3110msec) 00:40:37.537 slat (usec): min=6, max=6991, avg= 8.91, stdev=72.62 00:40:37.537 clat (usec): min=190, max=42016, avg=325.18, stdev=1687.22 00:40:37.537 lat (usec): min=198, max=42078, avg=333.33, stdev=1687.52 00:40:37.537 clat percentiles (usec): 00:40:37.537 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 233], 00:40:37.537 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 258], 00:40:37.537 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 302], 00:40:37.537 | 99.00th=[ 371], 99.50th=[ 396], 99.90th=[40633], 99.95th=[41157], 00:40:37.537 | 99.99th=[42206] 00:40:37.537 bw ( KiB/s): min= 9030, max=15576, per=25.33%, avg=12169.00, stdev=2976.00, samples=6 00:40:37.537 iops : min= 2257, max= 3894, avg=3042.17, stdev=744.11, samples=6 00:40:37.537 lat (usec) : 250=45.95%, 500=53.72%, 750=0.15% 00:40:37.537 lat (msec) : 50=0.17% 00:40:37.537 cpu : usr=1.58%, sys=5.08%, ctx=9255, majf=0, minf=1 00:40:37.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:40:37.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.537 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.537 issued rwts: total=9254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:37.537 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=611284: Sat Dec 14 22:49:58 2024 00:40:37.537 read: IOPS=3851, BW=15.0MiB/s (15.8MB/s)(49.6MiB/3296msec) 00:40:37.537 slat (usec): min=6, max=9168, avg=11.87, stdev=176.07 00:40:37.537 clat (usec): min=173, max=40897, avg=244.21, stdev=364.89 00:40:37.537 lat (usec): min=185, max=40905, avg=256.08, stdev=406.72 00:40:37.537 clat percentiles (usec): 00:40:37.537 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 215], 00:40:37.537 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 235], 00:40:37.537 | 70.00th=[ 243], 80.00th=[ 258], 90.00th=[ 281], 95.00th=[ 326], 00:40:37.537 | 99.00th=[ 498], 99.50th=[ 506], 99.90th=[ 523], 99.95th=[ 529], 00:40:37.537 | 99.99th=[ 1680] 00:40:37.537 bw ( KiB/s): min=13552, max=17499, per=32.14%, avg=15439.17, stdev=1305.53, samples=6 00:40:37.537 iops : min= 3388, max= 4374, avg=3859.67, stdev=326.15, samples=6 00:40:37.537 lat (usec) : 250=75.53%, 500=23.58%, 750=0.85%, 1000=0.01% 00:40:37.537 lat (msec) : 2=0.02%, 50=0.01% 00:40:37.537 cpu : usr=2.28%, sys=6.01%, ctx=12704, majf=0, minf=2 00:40:37.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:37.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.537 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.537 issued rwts: total=12696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:37.538 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, 
error=Operation not supported): pid=611287: Sat Dec 14 22:49:58 2024 00:40:37.538 read: IOPS=3234, BW=12.6MiB/s (13.2MB/s)(36.3MiB/2876msec) 00:40:37.538 slat (nsec): min=3664, max=52586, avg=8235.88, stdev=1512.41 00:40:37.538 clat (usec): min=193, max=41395, avg=297.16, stdev=1115.61 00:40:37.538 lat (usec): min=201, max=41403, avg=305.40, stdev=1115.60 00:40:37.538 clat percentiles (usec): 00:40:37.538 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 239], 00:40:37.538 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:40:37.538 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 318], 95.00th=[ 392], 00:40:37.538 | 99.00th=[ 502], 99.50th=[ 506], 99.90th=[ 922], 99.95th=[40633], 00:40:37.538 | 99.99th=[41157] 00:40:37.538 bw ( KiB/s): min=10968, max=15208, per=26.38%, avg=12672.00, stdev=1616.69, samples=5 00:40:37.538 iops : min= 2742, max= 3802, avg=3168.00, stdev=404.17, samples=5 00:40:37.538 lat (usec) : 250=47.95%, 500=51.04%, 750=0.89%, 1000=0.01% 00:40:37.538 lat (msec) : 2=0.02%, 50=0.08% 00:40:37.538 cpu : usr=1.11%, sys=3.20%, ctx=9303, majf=0, minf=2 00:40:37.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:37.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.538 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.538 issued rwts: total=9302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:37.538 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=611288: Sat Dec 14 22:49:58 2024 00:40:37.538 read: IOPS=3122, BW=12.2MiB/s (12.8MB/s)(32.6MiB/2669msec) 00:40:37.538 slat (nsec): min=7092, max=46730, avg=8856.84, stdev=2300.03 00:40:37.538 clat (usec): min=189, max=41356, avg=306.80, stdev=1430.85 00:40:37.538 lat (usec): min=202, max=41387, avg=315.66, stdev=1431.02 00:40:37.538 clat percentiles (usec): 00:40:37.538 | 
1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:40:37.538 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:40:37.538 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 318], 00:40:37.538 | 99.00th=[ 371], 99.50th=[ 400], 99.90th=[40633], 99.95th=[41157], 00:40:37.538 | 99.99th=[41157] 00:40:37.538 bw ( KiB/s): min= 2864, max=16048, per=26.13%, avg=12550.40, stdev=5472.21, samples=5 00:40:37.538 iops : min= 716, max= 4012, avg=3137.60, stdev=1368.05, samples=5 00:40:37.538 lat (usec) : 250=55.14%, 500=44.66%, 1000=0.01% 00:40:37.538 lat (msec) : 2=0.05%, 50=0.13% 00:40:37.538 cpu : usr=2.06%, sys=4.91%, ctx=8335, majf=0, minf=2 00:40:37.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:37.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.538 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.538 issued rwts: total=8334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:37.538 00:40:37.538 Run status group 0 (all jobs): 00:40:37.538 READ: bw=46.9MiB/s (49.2MB/s), 11.6MiB/s-15.0MiB/s (12.2MB/s-15.8MB/s), io=155MiB (162MB), run=2669-3296msec 00:40:37.538 00:40:37.538 Disk stats (read/write): 00:40:37.538 nvme0n1: ios=9251/0, merge=0/0, ticks=2809/0, in_queue=2809, util=94.30% 00:40:37.538 nvme0n2: ios=11820/0, merge=0/0, ticks=2757/0, in_queue=2757, util=94.57% 00:40:37.538 nvme0n3: ios=9124/0, merge=0/0, ticks=2669/0, in_queue=2669, util=96.17% 00:40:37.538 nvme0n4: ios=8074/0, merge=0/0, ticks=3245/0, in_queue=3245, util=98.83% 00:40:37.797 22:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:37.797 22:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:40:37.797 22:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:37.797 22:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:40:38.056 22:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:38.056 22:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:40:38.315 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:38.315 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:40:38.574 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:40:38.574 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 610985 00:40:38.574 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:40:38.574 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:38.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:38.574 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:38.574 22:49:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:40:38.574 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:38.574 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:38.574 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:38.574 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:38.574 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:40:38.574 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:40:38.574 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:40:38.574 nvmf hotplug test: fio failed as expected 00:40:38.574 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@91 -- # nvmftestfini 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:38.833 rmmod nvme_tcp 00:40:38.833 rmmod nvme_fabrics 00:40:38.833 rmmod nvme_keyring 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 608522 ']' 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 608522 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 608522 ']' 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 608522 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:40:38.833 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:38.833 
22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 608522 00:40:39.092 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:39.092 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:39.092 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 608522' 00:40:39.092 killing process with pid 608522 00:40:39.092 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 608522 00:40:39.092 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 608522 00:40:39.092 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:39.092 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:39.092 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:39.092 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:40:39.092 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:40:39.092 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:39.092 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:40:39.092 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:39.092 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:40:39.092 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:39.092 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:39.092 22:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:41.627 22:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:41.627 00:40:41.627 real 0m25.757s 00:40:41.627 user 1m31.907s 00:40:41.627 sys 0m11.776s 00:40:41.627 22:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:41.627 22:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:41.627 ************************************ 00:40:41.627 END TEST nvmf_fio_target 00:40:41.627 ************************************ 00:40:41.627 22:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:41.627 22:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:41.627 22:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:41.627 22:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:41.627 ************************************ 00:40:41.627 START TEST nvmf_bdevio 00:40:41.627 ************************************ 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 
00:40:41.627 * Looking for test storage... 00:40:41.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:41.627 22:50:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- 
# return 0 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:41.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.627 --rc genhtml_branch_coverage=1 00:40:41.627 --rc genhtml_function_coverage=1 00:40:41.627 --rc genhtml_legend=1 00:40:41.627 --rc geninfo_all_blocks=1 00:40:41.627 --rc geninfo_unexecuted_blocks=1 00:40:41.627 00:40:41.627 ' 00:40:41.627 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:41.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.627 --rc genhtml_branch_coverage=1 00:40:41.627 --rc genhtml_function_coverage=1 00:40:41.627 --rc genhtml_legend=1 00:40:41.627 --rc geninfo_all_blocks=1 00:40:41.628 --rc geninfo_unexecuted_blocks=1 00:40:41.628 00:40:41.628 ' 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:41.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.628 --rc genhtml_branch_coverage=1 00:40:41.628 --rc genhtml_function_coverage=1 00:40:41.628 --rc genhtml_legend=1 00:40:41.628 --rc geninfo_all_blocks=1 00:40:41.628 --rc geninfo_unexecuted_blocks=1 00:40:41.628 00:40:41.628 ' 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:41.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.628 --rc genhtml_branch_coverage=1 00:40:41.628 --rc genhtml_function_coverage=1 00:40:41.628 --rc genhtml_legend=1 00:40:41.628 --rc geninfo_all_blocks=1 00:40:41.628 --rc geninfo_unexecuted_blocks=1 00:40:41.628 00:40:41.628 ' 00:40:41.628 22:50:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:41.628 22:50:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:40:41.628 22:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:46.915 22:50:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:46.915 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:47.174 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:47.174 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:47.174 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:47.174 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:47.174 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:47.174 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:47.174 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:47.174 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:47.174 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:47.174 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:47.174 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:47.174 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:47.174 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:47.174 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:47.174 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:47.174 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:47.174 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:47.174 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:47.175 22:50:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:47.175 Found net devices under 0000:af:00.0: cvl_0_0 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:47.175 Found net devices under 0000:af:00.1: cvl_0_1 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:47.175 22:50:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:47.175 22:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:47.175 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:47.175 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:47.175 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:40:47.175 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:47.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:47.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:40:47.434 00:40:47.434 --- 10.0.0.2 ping statistics --- 00:40:47.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:47.434 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:47.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:47.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:40:47.434 00:40:47.434 --- 10.0.0.1 ping statistics --- 00:40:47.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:47.434 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=615438 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 615438 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:47.434 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 615438 ']' 00:40:47.435 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:47.435 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:47.435 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:47.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:47.435 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:47.435 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:47.435 [2024-12-14 22:50:08.240082] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:47.435 [2024-12-14 22:50:08.240975] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:40:47.435 [2024-12-14 22:50:08.241016] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:47.694 [2024-12-14 22:50:08.318979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:47.694 [2024-12-14 22:50:08.341623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:47.694 [2024-12-14 22:50:08.341659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:47.694 [2024-12-14 22:50:08.341667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:47.694 [2024-12-14 22:50:08.341673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:47.694 [2024-12-14 22:50:08.341678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:47.694 [2024-12-14 22:50:08.343036] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:40:47.694 [2024-12-14 22:50:08.343148] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:40:47.694 [2024-12-14 22:50:08.343255] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:47.694 [2024-12-14 22:50:08.343257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:40:47.694 [2024-12-14 22:50:08.405095] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:47.694 [2024-12-14 22:50:08.406245] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:47.694 [2024-12-14 22:50:08.406282] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:47.694 [2024-12-14 22:50:08.406755] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:47.694 [2024-12-14 22:50:08.406771] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:47.694 [2024-12-14 22:50:08.467948] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:47.694 Malloc0 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:47.694 [2024-12-14 22:50:08.556197] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:47.694 { 00:40:47.694 "params": { 00:40:47.694 "name": "Nvme$subsystem", 00:40:47.694 "trtype": "$TEST_TRANSPORT", 00:40:47.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:47.694 "adrfam": "ipv4", 00:40:47.694 "trsvcid": "$NVMF_PORT", 00:40:47.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:47.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:47.694 "hdgst": ${hdgst:-false}, 00:40:47.694 "ddgst": ${ddgst:-false} 00:40:47.694 }, 00:40:47.694 "method": "bdev_nvme_attach_controller" 00:40:47.694 } 00:40:47.694 EOF 00:40:47.694 )") 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:40:47.694 22:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:47.694 "params": { 00:40:47.694 "name": "Nvme1", 00:40:47.694 "trtype": "tcp", 00:40:47.694 "traddr": "10.0.0.2", 00:40:47.694 "adrfam": "ipv4", 00:40:47.694 "trsvcid": "4420", 00:40:47.694 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:47.694 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:47.694 "hdgst": false, 00:40:47.694 "ddgst": false 00:40:47.694 }, 00:40:47.694 "method": "bdev_nvme_attach_controller" 00:40:47.694 }' 00:40:47.953 [2024-12-14 22:50:08.604375] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:40:47.953 [2024-12-14 22:50:08.604421] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615472 ] 00:40:47.953 [2024-12-14 22:50:08.678173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:47.953 [2024-12-14 22:50:08.703086] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:47.953 [2024-12-14 22:50:08.703196] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:47.953 [2024-12-14 22:50:08.703197] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:48.212 I/O targets: 00:40:48.212 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:48.212 00:40:48.212 00:40:48.212 CUnit - A unit testing framework for C - Version 2.1-3 00:40:48.212 http://cunit.sourceforge.net/ 00:40:48.212 00:40:48.212 00:40:48.212 Suite: bdevio tests on: Nvme1n1 00:40:48.212 Test: blockdev write read block ...passed 00:40:48.212 Test: blockdev write zeroes read block ...passed 00:40:48.212 Test: blockdev write zeroes read no split ...passed 00:40:48.212 Test: blockdev 
write zeroes read split ...passed 00:40:48.212 Test: blockdev write zeroes read split partial ...passed 00:40:48.212 Test: blockdev reset ...[2024-12-14 22:50:09.036179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:40:48.212 [2024-12-14 22:50:09.036238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1b630 (9): Bad file descriptor 00:40:48.212 [2024-12-14 22:50:09.080865] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:40:48.212 passed 00:40:48.212 Test: blockdev write read 8 blocks ...passed 00:40:48.212 Test: blockdev write read size > 128k ...passed 00:40:48.212 Test: blockdev write read invalid size ...passed 00:40:48.471 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:48.471 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:48.471 Test: blockdev write read max offset ...passed 00:40:48.471 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:48.471 Test: blockdev writev readv 8 blocks ...passed 00:40:48.471 Test: blockdev writev readv 30 x 1block ...passed 00:40:48.471 Test: blockdev writev readv block ...passed 00:40:48.471 Test: blockdev writev readv size > 128k ...passed 00:40:48.471 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:48.471 Test: blockdev comparev and writev ...[2024-12-14 22:50:09.290935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:48.471 [2024-12-14 22:50:09.290972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.471 [2024-12-14 22:50:09.290988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:48.471 
[2024-12-14 22:50:09.290997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:48.471 [2024-12-14 22:50:09.291287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:48.471 [2024-12-14 22:50:09.291298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:48.471 [2024-12-14 22:50:09.291309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:48.471 [2024-12-14 22:50:09.291317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:48.471 [2024-12-14 22:50:09.291599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:48.471 [2024-12-14 22:50:09.291611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:48.471 [2024-12-14 22:50:09.291622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:48.471 [2024-12-14 22:50:09.291636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:48.471 [2024-12-14 22:50:09.291923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:48.471 [2024-12-14 22:50:09.291937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:48.471 [2024-12-14 22:50:09.291949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:48.471 [2024-12-14 22:50:09.291958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:48.471 passed 00:40:48.729 Test: blockdev nvme passthru rw ...passed 00:40:48.729 Test: blockdev nvme passthru vendor specific ...[2024-12-14 22:50:09.374270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:48.729 [2024-12-14 22:50:09.374289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:48.729 [2024-12-14 22:50:09.374400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:48.729 [2024-12-14 22:50:09.374410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:48.729 [2024-12-14 22:50:09.374512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:48.729 [2024-12-14 22:50:09.374522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:48.729 [2024-12-14 22:50:09.374625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:48.729 [2024-12-14 22:50:09.374635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:48.729 passed 00:40:48.729 Test: blockdev nvme admin passthru ...passed 00:40:48.729 Test: blockdev copy ...passed 00:40:48.729 00:40:48.729 Run Summary: Type Total Ran Passed Failed Inactive 00:40:48.729 suites 1 1 n/a 0 0 00:40:48.729 tests 23 23 23 0 0 00:40:48.729 asserts 152 152 152 0 n/a 00:40:48.729 00:40:48.729 Elapsed time = 1.084 
seconds 00:40:48.729 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:48.729 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.729 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:48.729 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.729 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:48.729 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:48.729 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:48.729 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:40:48.729 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:48.729 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:40:48.729 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:48.729 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:48.729 rmmod nvme_tcp 00:40:48.729 rmmod nvme_fabrics 00:40:48.729 rmmod nvme_keyring 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 615438 ']' 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 615438 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 615438 ']' 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 615438 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 615438 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 615438' 00:40:48.988 killing process with pid 615438 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 615438 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 615438 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:40:48.988 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:49.247 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:40:49.247 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:49.247 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:49.247 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:49.247 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:49.247 22:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:51.154 22:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:51.154 00:40:51.154 real 0m9.914s 00:40:51.154 user 0m8.333s 00:40:51.154 sys 0m5.190s 00:40:51.154 22:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:51.154 22:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:51.154 ************************************ 00:40:51.154 END TEST nvmf_bdevio 00:40:51.154 ************************************ 00:40:51.154 22:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:51.154 00:40:51.154 real 4m30.211s 00:40:51.154 user 9m6.175s 00:40:51.154 sys 1m50.536s 00:40:51.154 22:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:40:51.154 22:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:51.154 ************************************ 00:40:51.154 END TEST nvmf_target_core_interrupt_mode 00:40:51.154 ************************************ 00:40:51.154 22:50:12 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:51.154 22:50:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:51.154 22:50:12 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:51.154 22:50:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:51.414 ************************************ 00:40:51.414 START TEST nvmf_interrupt 00:40:51.414 ************************************ 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:51.414 * Looking for test storage... 
00:40:51.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:40:51.414 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:51.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.415 --rc genhtml_branch_coverage=1 00:40:51.415 --rc genhtml_function_coverage=1 00:40:51.415 --rc genhtml_legend=1 00:40:51.415 --rc geninfo_all_blocks=1 00:40:51.415 --rc geninfo_unexecuted_blocks=1 00:40:51.415 00:40:51.415 ' 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:51.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.415 --rc genhtml_branch_coverage=1 00:40:51.415 --rc 
genhtml_function_coverage=1 00:40:51.415 --rc genhtml_legend=1 00:40:51.415 --rc geninfo_all_blocks=1 00:40:51.415 --rc geninfo_unexecuted_blocks=1 00:40:51.415 00:40:51.415 ' 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:51.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.415 --rc genhtml_branch_coverage=1 00:40:51.415 --rc genhtml_function_coverage=1 00:40:51.415 --rc genhtml_legend=1 00:40:51.415 --rc geninfo_all_blocks=1 00:40:51.415 --rc geninfo_unexecuted_blocks=1 00:40:51.415 00:40:51.415 ' 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:51.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.415 --rc genhtml_branch_coverage=1 00:40:51.415 --rc genhtml_function_coverage=1 00:40:51.415 --rc genhtml_legend=1 00:40:51.415 --rc geninfo_all_blocks=1 00:40:51.415 --rc geninfo_unexecuted_blocks=1 00:40:51.415 00:40:51.415 ' 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:51.415 
22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.415 
22:50:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:51.415 22:50:12 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:51.415 
22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:40:51.415 22:50:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:57.984 22:50:17 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:57.984 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:57.984 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:57.984 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:57.985 22:50:17 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:57.985 Found net devices under 0000:af:00.0: cvl_0_0 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:57.985 Found net devices under 0000:af:00.1: cvl_0_1 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:57.985 22:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:57.985 22:50:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:57.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:57.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:40:57.985 00:40:57.985 --- 10.0.0.2 ping statistics --- 00:40:57.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:57.985 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:57.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:57.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:40:57.985 00:40:57.985 --- 10.0.0.1 ping statistics --- 00:40:57.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:57.985 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:57.985 22:50:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=619169 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 619169 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 619169 ']' 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:57.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:57.985 [2024-12-14 22:50:18.250857] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:57.985 [2024-12-14 22:50:18.251754] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:40:57.985 [2024-12-14 22:50:18.251785] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:57.985 [2024-12-14 22:50:18.329735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:57.985 [2024-12-14 22:50:18.351395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:57.985 [2024-12-14 22:50:18.351432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:57.985 [2024-12-14 22:50:18.351439] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:57.985 [2024-12-14 22:50:18.351445] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:57.985 [2024-12-14 22:50:18.351450] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:57.985 [2024-12-14 22:50:18.352543] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:57.985 [2024-12-14 22:50:18.352545] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:57.985 [2024-12-14 22:50:18.414793] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:57.985 [2024-12-14 22:50:18.415242] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:57.985 [2024-12-14 22:50:18.415515] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:40:57.985 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:40:57.986 5000+0 records in 00:40:57.986 5000+0 records out 00:40:57.986 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0179467 s, 571 MB/s 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:57.986 AIO0 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:57.986 22:50:18 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:57.986 [2024-12-14 22:50:18.541412] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:57.986 [2024-12-14 22:50:18.577628] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 619169 0 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 619169 0 idle 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619169 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619169 -w 256 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619169 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.22 reactor_0' 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619169 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.22 reactor_0 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 619169 1 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 619169 1 idle 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619169 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619169 -w 256 00:40:57.986 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:58.245 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619174 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:40:58.245 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619174 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 
reactor_1 00:40:58.245 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:58.245 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:58.245 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:58.245 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:58.245 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:58.245 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=619215 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 619169 0 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 619169 0 busy 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619169 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619169 -w 256 00:40:58.246 22:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:58.505 22:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619169 root 20 0 128.2g 46848 33792 S 6.7 0.1 0:00.23 reactor_0' 00:40:58.505 22:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619169 root 20 0 128.2g 46848 33792 S 6.7 0.1 0:00.23 reactor_0 00:40:58.505 22:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:58.505 22:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:58.505 22:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:40:58.505 22:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:40:58.505 22:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:58.505 22:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:58.505 22:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:40:59.443 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:40:59.443 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:59.443 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 
619169 -w 256 00:40:59.443 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:59.443 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619169 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:02.53 reactor_0' 00:40:59.443 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619169 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:02.53 reactor_0 00:40:59.443 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:59.443 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:59.702 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:59.702 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:59.702 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:59.702 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:59.702 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:59.702 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:59.702 22:50:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:59.702 22:50:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:59.702 22:50:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 619169 1 00:40:59.702 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 619169 1 busy 00:40:59.702 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619169 00:40:59.702 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:59.702 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:59.703 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:59.703 22:50:20 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:59.703 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:59.703 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:59.703 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:59.703 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:59.703 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619169 -w 256 00:40:59.703 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:59.703 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619174 root 20 0 128.2g 46848 33792 R 93.3 0.1 0:01.33 reactor_1' 00:40:59.703 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619174 root 20 0 128.2g 46848 33792 R 93.3 0.1 0:01.33 reactor_1 00:40:59.703 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:59.703 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:59.703 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:40:59.703 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:40:59.703 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:59.703 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:59.703 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:59.703 22:50:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:59.703 22:50:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 619215 00:41:09.684 Initializing NVMe Controllers 00:41:09.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:09.684 Controller IO queue size 256, less than 
required. 00:41:09.684 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:09.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:09.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:09.684 Initialization complete. Launching workers. 00:41:09.685 ======================================================== 00:41:09.685 Latency(us) 00:41:09.685 Device Information : IOPS MiB/s Average min max 00:41:09.685 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16526.34 64.56 15499.61 2857.77 29449.32 00:41:09.685 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16382.54 63.99 15631.54 7522.05 27929.58 00:41:09.685 ======================================================== 00:41:09.685 Total : 32908.88 128.55 15565.29 2857.77 29449.32 00:41:09.685 00:41:09.685 [2024-12-14 22:50:29.198761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07bd0 is same with the state(6) to be set 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 619169 0 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 619169 0 idle 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619169 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 
00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619169 -w 256 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619169 root 20 0 128.2g 46848 33792 S 6.7 0.1 0:20.23 reactor_0' 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619169 root 20 0 128.2g 46848 33792 S 6.7 0.1 0:20.23 reactor_0 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 619169 1 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 619169 1 idle 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@10 -- # local pid=619169 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619169 -w 256 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619174 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1' 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619174 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:09.685 22:50:29 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:09.685 22:50:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:41:11.590 22:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:11.590 22:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:11.590 22:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:11.590 22:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:11.590 22:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:11.590 22:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:41:11.590 22:50:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 619169 0 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 619169 0 idle 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- 
# local pid=619169 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619169 -w 256 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619169 root 20 0 128.2g 72960 33792 S 6.7 0.1 0:20.48 reactor_0' 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619169 root 20 0 128.2g 72960 33792 S 6.7 0.1 0:20.48 reactor_0 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 619169 1 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 619169 1 idle 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619169 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619169 -w 256 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619174 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.10 reactor_1' 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619174 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.10 reactor_1 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 
00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:11.590 22:50:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:11.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:41:11.850 
22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:11.850 rmmod nvme_tcp 00:41:11.850 rmmod nvme_fabrics 00:41:11.850 rmmod nvme_keyring 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 619169 ']' 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 619169 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 619169 ']' 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 619169 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:11.850 22:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 619169 00:41:12.110 22:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:12.110 22:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:12.110 22:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 619169' 00:41:12.110 killing process with pid 619169 00:41:12.110 22:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 619169 00:41:12.110 22:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 619169 00:41:12.110 22:50:32 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:12.110 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:12.110 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:12.110 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:41:12.110 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:41:12.110 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:12.110 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:41:12.369 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:12.369 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:12.369 22:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:12.369 22:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:12.369 22:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:14.274 22:50:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:14.274 00:41:14.274 real 0m23.007s 00:41:14.274 user 0m39.828s 00:41:14.274 sys 0m8.387s 00:41:14.274 22:50:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:14.274 22:50:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:14.274 ************************************ 00:41:14.274 END TEST nvmf_interrupt 00:41:14.274 ************************************ 00:41:14.274 00:41:14.274 real 35m28.478s 00:41:14.274 user 86m22.407s 00:41:14.274 sys 10m20.575s 00:41:14.274 22:50:35 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:14.274 22:50:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:14.274 ************************************ 00:41:14.274 END TEST 
nvmf_tcp 00:41:14.274 ************************************ 00:41:14.274 22:50:35 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:41:14.274 22:50:35 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:14.274 22:50:35 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:14.274 22:50:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:14.274 22:50:35 -- common/autotest_common.sh@10 -- # set +x 00:41:14.533 ************************************ 00:41:14.533 START TEST spdkcli_nvmf_tcp 00:41:14.533 ************************************ 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:14.533 * Looking for test storage... 00:41:14.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 
00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:14.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:14.533 
--rc genhtml_branch_coverage=1 00:41:14.533 --rc genhtml_function_coverage=1 00:41:14.533 --rc genhtml_legend=1 00:41:14.533 --rc geninfo_all_blocks=1 00:41:14.533 --rc geninfo_unexecuted_blocks=1 00:41:14.533 00:41:14.533 ' 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:14.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:14.533 --rc genhtml_branch_coverage=1 00:41:14.533 --rc genhtml_function_coverage=1 00:41:14.533 --rc genhtml_legend=1 00:41:14.533 --rc geninfo_all_blocks=1 00:41:14.533 --rc geninfo_unexecuted_blocks=1 00:41:14.533 00:41:14.533 ' 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:14.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:14.533 --rc genhtml_branch_coverage=1 00:41:14.533 --rc genhtml_function_coverage=1 00:41:14.533 --rc genhtml_legend=1 00:41:14.533 --rc geninfo_all_blocks=1 00:41:14.533 --rc geninfo_unexecuted_blocks=1 00:41:14.533 00:41:14.533 ' 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:14.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:14.533 --rc genhtml_branch_coverage=1 00:41:14.533 --rc genhtml_function_coverage=1 00:41:14.533 --rc genhtml_legend=1 00:41:14.533 --rc geninfo_all_blocks=1 00:41:14.533 --rc geninfo_unexecuted_blocks=1 00:41:14.533 00:41:14.533 ' 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:14.533 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:14.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 
00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=621970 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 621970 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 621970 ']' 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:14.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:14.534 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:14.792 [2024-12-14 22:50:35.443832] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:41:14.792 [2024-12-14 22:50:35.443881] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621970 ] 00:41:14.792 [2024-12-14 22:50:35.517984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:14.792 [2024-12-14 22:50:35.541840] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:14.792 [2024-12-14 22:50:35.541842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:14.792 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:14.792 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:41:14.792 22:50:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:14.792 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:14.792 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:14.792 22:50:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:14.792 22:50:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:41:14.792 22:50:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:41:14.792 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:14.792 22:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:15.051 22:50:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:15.051 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:15.051 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:15.051 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:15.051 '\''/bdevs/malloc create 32 
512 Malloc5'\'' '\''Malloc5'\'' True 00:41:15.051 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:15.051 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:15.052 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:15.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:15.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:15.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:15.052 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:15.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:15.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:15.052 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:15.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:15.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:15.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:15.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:15.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:15.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:15.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:15.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:15.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:15.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:15.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:15.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:15.052 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:15.052 ' 00:41:17.587 [2024-12-14 22:50:38.408169] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:18.965 [2024-12-14 22:50:39.740509] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:41:21.500 [2024-12-14 22:50:42.224193] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:41:24.034 [2024-12-14 22:50:44.374886] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:41:25.410 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:41:25.410 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:41:25.410 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:41:25.410 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:41:25.410 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:41:25.410 Executing command: 
['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:41:25.410 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:41:25.410 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:25.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:41:25.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:41:25.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:25.410 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:25.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:41:25.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:25.410 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:25.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:41:25.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:25.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:25.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:25.410 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:25.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:41:25.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:41:25.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:25.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:41:25.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:25.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:41:25.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:41:25.411 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:41:25.411 22:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:41:25.411 22:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:25.411 22:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:25.411 22:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:41:25.411 22:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:25.411 22:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:25.411 22:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:41:25.411 22:50:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:41:26.091 22:50:46 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:41:26.091 22:50:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:41:26.091 22:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:41:26.091 22:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:26.091 22:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:26.091 22:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:41:26.091 22:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:26.091 22:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:26.091 22:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:41:26.091 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:41:26.091 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:26.091 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:41:26.091 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:41:26.091 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:41:26.091 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:41:26.091 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:26.091 '\''/bdevs/malloc delete 
Malloc6'\'' '\''Malloc6'\'' 00:41:26.091 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:41:26.091 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:41:26.091 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:41:26.091 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:41:26.091 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:41:26.091 ' 00:41:31.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:41:31.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:41:31.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:31.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:41:31.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:41:31.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:41:31.426 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:41:31.426 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:31.426 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:41:31.426 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:41:31.426 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:41:31.426 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:41:31.426 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:41:31.426 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:41:31.426 22:50:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 00:41:31.426 22:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:31.426 22:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 621970 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 621970 ']' 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 621970 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 621970 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 621970' 00:41:31.686 killing process with pid 621970 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 621970 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 621970 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 621970 ']' 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 621970 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 621970 ']' 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 621970 00:41:31.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (621970) - No such process 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@981 -- # echo 'Process with pid 621970 is not found' 00:41:31.686 Process with pid 621970 is not found 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:41:31.686 00:41:31.686 real 0m17.348s 00:41:31.686 user 0m38.170s 00:41:31.686 sys 0m0.891s 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:31.686 22:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:31.686 ************************************ 00:41:31.686 END TEST spdkcli_nvmf_tcp 00:41:31.686 ************************************ 00:41:31.686 22:50:52 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:31.686 22:50:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:31.686 22:50:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:31.686 22:50:52 -- common/autotest_common.sh@10 -- # set +x 00:41:31.946 ************************************ 00:41:31.946 START TEST nvmf_identify_passthru 00:41:31.946 ************************************ 00:41:31.946 22:50:52 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:31.946 * Looking for test storage... 
00:41:31.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:31.946 22:50:52 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:31.946 22:50:52 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:41:31.946 22:50:52 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:31.946 22:50:52 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:31.946 22:50:52 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:41:31.946 22:50:52 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:31.946 22:50:52 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:31.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:31.946 --rc genhtml_branch_coverage=1 00:41:31.946 --rc genhtml_function_coverage=1 00:41:31.946 --rc genhtml_legend=1 00:41:31.946 --rc geninfo_all_blocks=1 00:41:31.946 --rc geninfo_unexecuted_blocks=1 00:41:31.946 00:41:31.946 ' 00:41:31.946 22:50:52 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:31.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:31.946 --rc genhtml_branch_coverage=1 00:41:31.946 --rc genhtml_function_coverage=1 
00:41:31.946 --rc genhtml_legend=1 00:41:31.946 --rc geninfo_all_blocks=1 00:41:31.946 --rc geninfo_unexecuted_blocks=1 00:41:31.946 00:41:31.946 ' 00:41:31.946 22:50:52 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:31.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:31.946 --rc genhtml_branch_coverage=1 00:41:31.946 --rc genhtml_function_coverage=1 00:41:31.946 --rc genhtml_legend=1 00:41:31.946 --rc geninfo_all_blocks=1 00:41:31.946 --rc geninfo_unexecuted_blocks=1 00:41:31.946 00:41:31.946 ' 00:41:31.946 22:50:52 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:31.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:31.946 --rc genhtml_branch_coverage=1 00:41:31.946 --rc genhtml_function_coverage=1 00:41:31.946 --rc genhtml_legend=1 00:41:31.946 --rc geninfo_all_blocks=1 00:41:31.946 --rc geninfo_unexecuted_blocks=1 00:41:31.946 00:41:31.946 ' 00:41:31.946 22:50:52 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:31.946 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:41:31.946 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:31.946 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:31.946 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:31.946 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:31.946 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:31.946 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:31.946 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:31.946 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:31.946 22:50:52 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:31.946 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:31.946 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:31.946 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:31.946 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:31.946 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:31.946 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:31.947 22:50:52 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:31.947 22:50:52 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:31.947 22:50:52 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:31.947 22:50:52 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:31.947 22:50:52 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.947 22:50:52 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.947 22:50:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.947 22:50:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:31.947 22:50:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:31.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:31.947 22:50:52 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:31.947 22:50:52 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:31.947 22:50:52 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:31.947 22:50:52 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:31.947 22:50:52 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:31.947 22:50:52 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.947 22:50:52 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.947 22:50:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.947 22:50:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:31.947 22:50:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.947 22:50:52 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@476 -- 
# prepare_net_devs 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:31.947 22:50:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:31.947 22:50:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:31.947 22:50:52 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:41:31.947 22:50:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:41:38.518 22:50:58 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:38.518 
22:50:58 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:38.518 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:38.518 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:38.519 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:38.519 Found net devices under 0000:af:00.0: cvl_0_0 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:38.519 Found net devices under 0000:af:00.1: cvl_0_1 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:38.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:38.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:41:38.519 00:41:38.519 --- 10.0.0.2 ping statistics --- 00:41:38.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:38.519 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:38.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:38.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:41:38.519 00:41:38.519 --- 10.0.0.1 ping statistics --- 00:41:38.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:38.519 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:38.519 22:50:58 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:38.519 22:50:58 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:41:38.519 22:50:58 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:38.519 22:50:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:38.519 22:50:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:41:38.519 22:50:58 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:41:38.519 22:50:58 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:41:38.519 22:50:58 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:41:38.519 22:50:58 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:41:38.519 22:50:58 nvmf_identify_passthru -- 
common/autotest_common.sh@1498 -- # bdfs=() 00:41:38.519 22:50:58 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:41:38.519 22:50:58 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:41:38.519 22:50:58 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:41:38.519 22:50:58 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:41:38.519 22:50:58 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:41:38.519 22:50:58 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:41:38.519 22:50:58 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:41:38.519 22:50:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:41:38.519 22:50:58 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:41:38.519 22:50:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:41:38.519 22:50:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:41:38.519 22:50:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:41:42.765 22:51:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ7244049A1P0FGN 00:41:42.765 22:51:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:41:42.765 22:51:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:41:42.765 22:51:02 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:41:46.953 22:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:41:46.953 22:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:41:46.953 22:51:06 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:46.953 22:51:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:46.953 22:51:07 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:41:46.953 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:46.953 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:46.953 22:51:07 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=629395 00:41:46.953 22:51:07 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:41:46.953 22:51:07 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:46.953 22:51:07 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 629395 00:41:46.953 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 629395 ']' 00:41:46.953 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:46.953 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:46.953 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:46.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:46.953 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:46.953 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:46.953 [2024-12-14 22:51:07.087803] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:41:46.953 [2024-12-14 22:51:07.087850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:46.953 [2024-12-14 22:51:07.166879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:46.953 [2024-12-14 22:51:07.190404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:46.953 [2024-12-14 22:51:07.190442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:46.953 [2024-12-14 22:51:07.190449] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:46.953 [2024-12-14 22:51:07.190455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:46.953 [2024-12-14 22:51:07.190460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:46.953 [2024-12-14 22:51:07.191774] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:46.953 [2024-12-14 22:51:07.191886] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:46.953 [2024-12-14 22:51:07.192012] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:46.953 [2024-12-14 22:51:07.192013] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:46.953 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:46.953 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:41:46.953 22:51:07 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:41:46.953 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.953 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:46.953 INFO: Log level set to 20 00:41:46.953 INFO: Requests: 00:41:46.953 { 00:41:46.953 "jsonrpc": "2.0", 00:41:46.953 "method": "nvmf_set_config", 00:41:46.953 "id": 1, 00:41:46.953 "params": { 00:41:46.953 "admin_cmd_passthru": { 00:41:46.953 "identify_ctrlr": true 00:41:46.953 } 00:41:46.953 } 00:41:46.953 } 00:41:46.953 00:41:46.953 INFO: response: 00:41:46.953 { 00:41:46.953 "jsonrpc": "2.0", 00:41:46.953 "id": 1, 00:41:46.953 "result": true 00:41:46.953 } 00:41:46.953 00:41:46.953 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.953 22:51:07 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:41:46.953 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.953 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:46.953 INFO: Setting log level to 20 00:41:46.953 INFO: Setting log level to 20 00:41:46.953 INFO: Log level set to 20 00:41:46.953 INFO: Log level set to 20 00:41:46.954 
INFO: Requests: 00:41:46.954 { 00:41:46.954 "jsonrpc": "2.0", 00:41:46.954 "method": "framework_start_init", 00:41:46.954 "id": 1 00:41:46.954 } 00:41:46.954 00:41:46.954 INFO: Requests: 00:41:46.954 { 00:41:46.954 "jsonrpc": "2.0", 00:41:46.954 "method": "framework_start_init", 00:41:46.954 "id": 1 00:41:46.954 } 00:41:46.954 00:41:46.954 [2024-12-14 22:51:07.311480] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:41:46.954 INFO: response: 00:41:46.954 { 00:41:46.954 "jsonrpc": "2.0", 00:41:46.954 "id": 1, 00:41:46.954 "result": true 00:41:46.954 } 00:41:46.954 00:41:46.954 INFO: response: 00:41:46.954 { 00:41:46.954 "jsonrpc": "2.0", 00:41:46.954 "id": 1, 00:41:46.954 "result": true 00:41:46.954 } 00:41:46.954 00:41:46.954 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.954 22:51:07 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:46.954 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.954 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:46.954 INFO: Setting log level to 40 00:41:46.954 INFO: Setting log level to 40 00:41:46.954 INFO: Setting log level to 40 00:41:46.954 [2024-12-14 22:51:07.324776] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:46.954 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.954 22:51:07 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:41:46.954 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:46.954 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:46.954 22:51:07 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:41:46.954 22:51:07 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.954 22:51:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:49.488 Nvme0n1 00:41:49.488 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.488 22:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:41:49.488 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.488 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:49.488 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.488 22:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:41:49.488 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.488 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:49.488 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.488 22:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:49.488 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.488 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:49.488 [2024-12-14 22:51:10.229065] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:49.488 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.488 22:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:41:49.488 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.488 22:51:10 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:49.488 [ 00:41:49.488 { 00:41:49.488 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:49.488 "subtype": "Discovery", 00:41:49.488 "listen_addresses": [], 00:41:49.488 "allow_any_host": true, 00:41:49.488 "hosts": [] 00:41:49.488 }, 00:41:49.488 { 00:41:49.488 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:41:49.488 "subtype": "NVMe", 00:41:49.488 "listen_addresses": [ 00:41:49.488 { 00:41:49.488 "trtype": "TCP", 00:41:49.488 "adrfam": "IPv4", 00:41:49.488 "traddr": "10.0.0.2", 00:41:49.488 "trsvcid": "4420" 00:41:49.488 } 00:41:49.488 ], 00:41:49.488 "allow_any_host": true, 00:41:49.488 "hosts": [], 00:41:49.488 "serial_number": "SPDK00000000000001", 00:41:49.488 "model_number": "SPDK bdev Controller", 00:41:49.488 "max_namespaces": 1, 00:41:49.488 "min_cntlid": 1, 00:41:49.488 "max_cntlid": 65519, 00:41:49.488 "namespaces": [ 00:41:49.488 { 00:41:49.488 "nsid": 1, 00:41:49.488 "bdev_name": "Nvme0n1", 00:41:49.488 "name": "Nvme0n1", 00:41:49.488 "nguid": "B847E9CF034948E58D4E8D5BE56849A4", 00:41:49.488 "uuid": "b847e9cf-0349-48e5-8d4e-8d5be56849a4" 00:41:49.488 } 00:41:49.488 ] 00:41:49.488 } 00:41:49.488 ] 00:41:49.488 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.488 22:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:49.488 22:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:41:49.488 22:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:41:49.488 22:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:41:49.488 22:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:49.488 22:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:41:49.488 22:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:41:49.747 22:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:41:49.747 22:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:41:49.747 22:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:41:49.747 22:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:49.747 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.747 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:49.747 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.747 22:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:41:49.747 22:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:41:49.748 22:51:10 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:49.748 22:51:10 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:41:49.748 22:51:10 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:49.748 22:51:10 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:41:49.748 22:51:10 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:49.748 22:51:10 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:49.748 rmmod nvme_tcp 00:41:49.748 rmmod nvme_fabrics 00:41:49.748 rmmod nvme_keyring 00:41:49.748 22:51:10 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:49.748 22:51:10 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:41:49.748 22:51:10 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:41:49.748 22:51:10 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 629395 ']' 00:41:49.748 22:51:10 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 629395 00:41:49.748 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 629395 ']' 00:41:49.748 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 629395 00:41:49.748 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:41:49.748 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:49.748 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 629395 00:41:50.006 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:50.006 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:50.006 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 629395' 00:41:50.006 killing process with pid 629395 00:41:50.006 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 629395 00:41:50.006 22:51:10 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 629395 00:41:51.384 22:51:12 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:51.384 22:51:12 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:51.384 22:51:12 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:51.384 22:51:12 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:41:51.384 22:51:12 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:51.384 22:51:12 nvmf_identify_passthru -- nvmf/common.sh@791 
-- # iptables-save 00:41:51.384 22:51:12 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:41:51.384 22:51:12 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:51.384 22:51:12 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:51.384 22:51:12 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:51.384 22:51:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:51.384 22:51:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:53.919 22:51:14 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:53.919 00:41:53.919 real 0m21.585s 00:41:53.919 user 0m27.095s 00:41:53.919 sys 0m5.265s 00:41:53.919 22:51:14 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:53.919 22:51:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:53.919 ************************************ 00:41:53.919 END TEST nvmf_identify_passthru 00:41:53.919 ************************************ 00:41:53.919 22:51:14 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:53.919 22:51:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:53.919 22:51:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:53.919 22:51:14 -- common/autotest_common.sh@10 -- # set +x 00:41:53.919 ************************************ 00:41:53.919 START TEST nvmf_dif 00:41:53.919 ************************************ 00:41:53.919 22:51:14 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:53.919 * Looking for test storage... 
00:41:53.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:53.919 22:51:14 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:53.919 22:51:14 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:41:53.919 22:51:14 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:53.919 22:51:14 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:53.919 22:51:14 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:41:53.919 22:51:14 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:53.919 22:51:14 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:53.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.919 --rc genhtml_branch_coverage=1 00:41:53.920 --rc genhtml_function_coverage=1 00:41:53.920 --rc genhtml_legend=1 00:41:53.920 --rc geninfo_all_blocks=1 00:41:53.920 --rc geninfo_unexecuted_blocks=1 00:41:53.920 00:41:53.920 ' 00:41:53.920 22:51:14 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:53.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.920 --rc genhtml_branch_coverage=1 00:41:53.920 --rc genhtml_function_coverage=1 00:41:53.920 --rc genhtml_legend=1 00:41:53.920 --rc geninfo_all_blocks=1 00:41:53.920 --rc geninfo_unexecuted_blocks=1 00:41:53.920 00:41:53.920 ' 00:41:53.920 22:51:14 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:41:53.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.920 --rc genhtml_branch_coverage=1 00:41:53.920 --rc genhtml_function_coverage=1 00:41:53.920 --rc genhtml_legend=1 00:41:53.920 --rc geninfo_all_blocks=1 00:41:53.920 --rc geninfo_unexecuted_blocks=1 00:41:53.920 00:41:53.920 ' 00:41:53.920 22:51:14 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:53.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.920 --rc genhtml_branch_coverage=1 00:41:53.920 --rc genhtml_function_coverage=1 00:41:53.920 --rc genhtml_legend=1 00:41:53.920 --rc geninfo_all_blocks=1 00:41:53.920 --rc geninfo_unexecuted_blocks=1 00:41:53.920 00:41:53.920 ' 00:41:53.920 22:51:14 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:53.920 22:51:14 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:53.920 22:51:14 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:41:53.920 22:51:14 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:53.920 22:51:14 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:53.920 22:51:14 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:53.920 22:51:14 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.920 22:51:14 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.920 22:51:14 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.920 22:51:14 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:41:53.920 22:51:14 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:53.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:53.920 22:51:14 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:53.920 22:51:14 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:41:53.920 22:51:14 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:53.920 22:51:14 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:53.920 22:51:14 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:53.920 22:51:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:53.920 22:51:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:53.920 22:51:14 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:41:53.920 22:51:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:41:59.193 22:51:20 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:59.193 22:51:20 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:59.194 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:59.194 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:59.194 22:51:20 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:59.194 Found net devices under 0000:af:00.0: cvl_0_0 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:59.194 Found net devices under 0000:af:00.1: cvl_0_1 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:59.194 
22:51:20 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:59.194 22:51:20 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:59.453 22:51:20 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:59.453 22:51:20 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:59.453 22:51:20 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:59.453 22:51:20 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:59.453 22:51:20 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:59.453 22:51:20 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:59.453 22:51:20 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:59.453 22:51:20 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:59.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:59.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:41:59.453 00:41:59.453 --- 10.0.0.2 ping statistics --- 00:41:59.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:59.453 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:41:59.453 22:51:20 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:59.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:59.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:41:59.453 00:41:59.453 --- 10.0.0.1 ping statistics --- 00:41:59.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:59.453 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:41:59.453 22:51:20 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:59.453 22:51:20 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:41:59.453 22:51:20 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:59.453 22:51:20 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:02.743 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:42:02.743 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:42:02.743 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:42:02.743 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:42:02.743 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:42:02.743 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:42:02.743 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:42:02.743 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:42:02.743 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:42:02.743 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:42:02.743 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:42:02.743 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:42:02.743 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:42:02.743 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:42:02.743 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:42:02.743 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:42:02.743 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:42:02.743 22:51:23 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:02.743 22:51:23 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:02.743 22:51:23 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:02.743 22:51:23 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:02.743 22:51:23 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:02.743 22:51:23 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:02.743 22:51:23 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:42:02.743 22:51:23 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:42:02.743 22:51:23 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:02.743 22:51:23 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:02.743 22:51:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:02.743 22:51:23 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=634945 00:42:02.743 22:51:23 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:42:02.743 22:51:23 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 634945 00:42:02.743 22:51:23 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 634945 ']' 00:42:02.743 22:51:23 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:02.743 22:51:23 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:02.743 22:51:23 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:02.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:02.743 22:51:23 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:02.743 22:51:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:02.743 [2024-12-14 22:51:23.208051] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:42:02.743 [2024-12-14 22:51:23.208099] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:02.743 [2024-12-14 22:51:23.286762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:02.743 [2024-12-14 22:51:23.308325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:02.743 [2024-12-14 22:51:23.308363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:02.743 [2024-12-14 22:51:23.308369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:02.743 [2024-12-14 22:51:23.308375] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:02.743 [2024-12-14 22:51:23.308380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
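The `nvmf_tcp_init` steps logged above (nvmf/common.sh@250-291) can be condensed into a short script: one e810 port is moved into a network namespace so target (10.0.0.2) and initiator (10.0.0.1) can talk over real hardware on a single host. This is a dry-run sketch only; the real commands need root plus the `cvl_0_0`/`cvl_0_1` netdevs present on this rig, so the `run` wrapper just echoes each step.

```shell
# Dry-run sketch of the nvmf_tcp_init sequence from the log above.
# Replace run() with direct execution (as root) to apply it for real.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                        # target namespace name from the log
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"       # target-side port moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1   # initiator IP stays in the root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP listener port
run ping -c 1 10.0.0.2                    # reachability check, as in the log
```

With the wrapper removed, the final `ping` reproduces the round-trip check the harness uses before declaring `is_hw=yes` and starting `nvmf_tgt` inside the namespace.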
00:42:02.743 [2024-12-14 22:51:23.308893] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:02.743 22:51:23 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:02.743 22:51:23 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:42:02.743 22:51:23 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:02.743 22:51:23 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:02.743 22:51:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:02.743 22:51:23 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:02.743 22:51:23 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:42:02.743 22:51:23 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:42:02.743 22:51:23 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.743 22:51:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:02.743 [2024-12-14 22:51:23.440493] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:02.743 22:51:23 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.743 22:51:23 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:42:02.743 22:51:23 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:02.743 22:51:23 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:02.743 22:51:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:02.743 ************************************ 00:42:02.743 START TEST fio_dif_1_default 00:42:02.743 ************************************ 00:42:02.743 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:42:02.743 22:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:42:02.743 22:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:42:02.743 22:51:23 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:42:02.743 22:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:42:02.743 22:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:42:02.743 22:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:02.743 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.743 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:02.743 bdev_null0 00:42:02.743 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.743 22:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:02.743 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.743 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:02.744 [2024-12-14 22:51:23.512795] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:02.744 { 00:42:02.744 "params": { 00:42:02.744 "name": "Nvme$subsystem", 00:42:02.744 "trtype": "$TEST_TRANSPORT", 00:42:02.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:02.744 "adrfam": "ipv4", 00:42:02.744 "trsvcid": "$NVMF_PORT", 00:42:02.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:02.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:02.744 "hdgst": ${hdgst:-false}, 00:42:02.744 "ddgst": ${ddgst:-false} 00:42:02.744 }, 00:42:02.744 "method": "bdev_nvme_attach_controller" 00:42:02.744 } 00:42:02.744 EOF 00:42:02.744 )") 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:02.744 "params": { 00:42:02.744 "name": "Nvme0", 00:42:02.744 "trtype": "tcp", 00:42:02.744 "traddr": "10.0.0.2", 00:42:02.744 "adrfam": "ipv4", 00:42:02.744 "trsvcid": "4420", 00:42:02.744 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:02.744 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:02.744 "hdgst": false, 00:42:02.744 "ddgst": false 00:42:02.744 }, 00:42:02.744 "method": "bdev_nvme_attach_controller" 00:42:02.744 }' 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:02.744 22:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:03.001 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:03.001 fio-3.35 
00:42:03.002 Starting 1 thread 00:42:15.210 00:42:15.210 filename0: (groupid=0, jobs=1): err= 0: pid=635188: Sat Dec 14 22:51:34 2024 00:42:15.210 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10018msec) 00:42:15.210 slat (nsec): min=5904, max=25177, avg=6203.32, stdev=884.30 00:42:15.210 clat (usec): min=40825, max=45481, avg=41038.39, stdev=340.92 00:42:15.210 lat (usec): min=40832, max=45507, avg=41044.59, stdev=341.34 00:42:15.210 clat percentiles (usec): 00:42:15.210 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:15.210 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:15.210 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:15.210 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:42:15.210 | 99.99th=[45351] 00:42:15.210 bw ( KiB/s): min= 384, max= 416, per=99.56%, avg=388.80, stdev=11.72, samples=20 00:42:15.210 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:42:15.210 lat (msec) : 50=100.00% 00:42:15.210 cpu : usr=92.60%, sys=7.16%, ctx=15, majf=0, minf=0 00:42:15.210 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:15.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:15.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:15.210 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:15.210 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:15.210 00:42:15.210 Run status group 0 (all jobs): 00:42:15.210 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10018-10018msec 00:42:15.210 22:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:15.210 22:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:42:15.210 22:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:42:15.210 22:51:34 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:42:15.210 22:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:15.210 22:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:15.210 22:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.210 22:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:15.210 22:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.210 22:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.211 00:42:15.211 real 0m11.165s 00:42:15.211 user 0m15.877s 00:42:15.211 sys 0m1.014s 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:15.211 ************************************ 00:42:15.211 END TEST fio_dif_1_default 00:42:15.211 ************************************ 00:42:15.211 22:51:34 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:15.211 22:51:34 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:15.211 22:51:34 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:15.211 22:51:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:15.211 ************************************ 00:42:15.211 START TEST fio_dif_1_multi_subsystems 00:42:15.211 ************************************ 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:15.211 bdev_null0 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:15.211 [2024-12-14 22:51:34.748807] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:15.211 bdev_null1 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:15.211 22:51:34 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:15.211 { 00:42:15.211 "params": { 00:42:15.211 "name": "Nvme$subsystem", 00:42:15.211 "trtype": "$TEST_TRANSPORT", 00:42:15.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:15.211 "adrfam": "ipv4", 00:42:15.211 "trsvcid": "$NVMF_PORT", 00:42:15.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:15.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:15.211 "hdgst": ${hdgst:-false}, 00:42:15.211 "ddgst": ${ddgst:-false} 00:42:15.211 }, 00:42:15.211 "method": "bdev_nvme_attach_controller" 00:42:15.211 } 00:42:15.211 EOF 00:42:15.211 )") 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:15.211 22:51:34 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:15.211 { 00:42:15.211 "params": { 00:42:15.211 "name": "Nvme$subsystem", 00:42:15.211 "trtype": "$TEST_TRANSPORT", 00:42:15.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:15.211 "adrfam": "ipv4", 00:42:15.211 "trsvcid": "$NVMF_PORT", 00:42:15.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:15.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:15.211 "hdgst": ${hdgst:-false}, 00:42:15.211 "ddgst": ${ddgst:-false} 00:42:15.211 }, 00:42:15.211 "method": "bdev_nvme_attach_controller" 00:42:15.211 } 00:42:15.211 EOF 00:42:15.211 )") 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:42:15.211 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:15.211 "params": { 00:42:15.211 "name": "Nvme0", 00:42:15.211 "trtype": "tcp", 00:42:15.211 "traddr": "10.0.0.2", 00:42:15.211 "adrfam": "ipv4", 00:42:15.211 "trsvcid": "4420", 00:42:15.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:15.211 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:15.211 "hdgst": false, 00:42:15.211 "ddgst": false 00:42:15.211 }, 00:42:15.211 "method": "bdev_nvme_attach_controller" 00:42:15.211 },{ 00:42:15.211 "params": { 00:42:15.211 "name": "Nvme1", 00:42:15.212 "trtype": "tcp", 00:42:15.212 "traddr": "10.0.0.2", 00:42:15.212 "adrfam": "ipv4", 00:42:15.212 "trsvcid": "4420", 00:42:15.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:15.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:15.212 "hdgst": false, 00:42:15.212 "ddgst": false 00:42:15.212 }, 00:42:15.212 "method": "bdev_nvme_attach_controller" 00:42:15.212 }' 00:42:15.212 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:15.212 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:15.212 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:15.212 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:15.212 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:15.212 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:15.212 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:15.212 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:15.212 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:15.212 22:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:15.212 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:15.212 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:15.212 fio-3.35 00:42:15.212 Starting 2 threads 00:42:25.183 00:42:25.183 filename0: (groupid=0, jobs=1): err= 0: pid=637103: Sat Dec 14 22:51:45 2024 00:42:25.183 read: IOPS=191, BW=766KiB/s (785kB/s)(7680KiB/10023msec) 00:42:25.183 slat (nsec): min=6118, max=35171, avg=7325.85, stdev=2250.13 00:42:25.183 clat (usec): min=382, max=42694, avg=20859.44, stdev=20480.55 00:42:25.183 lat (usec): min=389, max=42701, avg=20866.76, stdev=20480.03 00:42:25.183 clat percentiles (usec): 00:42:25.183 | 1.00th=[ 392], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 416], 00:42:25.183 | 30.00th=[ 429], 40.00th=[ 570], 50.00th=[ 988], 60.00th=[41157], 00:42:25.183 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:42:25.183 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:25.183 | 99.99th=[42730] 00:42:25.183 bw ( KiB/s): min= 672, max= 896, per=49.52%, avg=766.40, stdev=39.50, samples=20 00:42:25.183 iops : min= 168, max= 224, avg=191.60, stdev= 9.88, samples=20 00:42:25.183 lat (usec) : 500=35.94%, 750=13.02%, 1000=1.04% 00:42:25.183 lat (msec) : 2=0.21%, 50=49.79% 00:42:25.183 cpu : usr=96.77%, sys=2.99%, ctx=12, majf=0, minf=76 00:42:25.183 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:25.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:42:25.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:25.183 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:25.183 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:25.183 filename1: (groupid=0, jobs=1): err= 0: pid=637104: Sat Dec 14 22:51:45 2024 00:42:25.183 read: IOPS=195, BW=781KiB/s (799kB/s)(7824KiB/10023msec) 00:42:25.183 slat (nsec): min=6104, max=32860, avg=7242.03, stdev=1914.70 00:42:25.183 clat (usec): min=385, max=42529, avg=20475.49, stdev=20389.02 00:42:25.183 lat (usec): min=391, max=42536, avg=20482.74, stdev=20388.52 00:42:25.183 clat percentiles (usec): 00:42:25.183 | 1.00th=[ 400], 5.00th=[ 416], 10.00th=[ 429], 20.00th=[ 486], 00:42:25.183 | 30.00th=[ 603], 40.00th=[ 611], 50.00th=[ 947], 60.00th=[40633], 00:42:25.183 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:42:25.183 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:25.183 | 99.99th=[42730] 00:42:25.183 bw ( KiB/s): min= 704, max= 896, per=50.43%, avg=780.80, stdev=49.14, samples=20 00:42:25.183 iops : min= 176, max= 224, avg=195.20, stdev=12.28, samples=20 00:42:25.183 lat (usec) : 500=23.93%, 750=24.74%, 1000=1.84% 00:42:25.183 lat (msec) : 2=0.61%, 50=48.88% 00:42:25.183 cpu : usr=96.98%, sys=2.78%, ctx=13, majf=0, minf=98 00:42:25.183 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:25.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:25.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:25.183 issued rwts: total=1956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:25.183 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:25.183 00:42:25.183 Run status group 0 (all jobs): 00:42:25.183 READ: bw=1547KiB/s (1584kB/s), 766KiB/s-781KiB/s (785kB/s-799kB/s), io=15.1MiB (15.9MB), run=10023-10023msec 00:42:25.183 22:51:45 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:42:25.183 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:42:25.183 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:25.183 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:25.183 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:42:25.183 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:25.183 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.183 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:25.183 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.183 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:25.183 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.183 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:25.183 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.183 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:25.183 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:25.183 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:42:25.183 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:25.183 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.183 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:42:25.184 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.184 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:25.184 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.184 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:25.184 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.184 00:42:25.184 real 0m11.276s 00:42:25.184 user 0m26.445s 00:42:25.184 sys 0m0.917s 00:42:25.184 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:25.184 22:51:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:25.184 ************************************ 00:42:25.184 END TEST fio_dif_1_multi_subsystems 00:42:25.184 ************************************ 00:42:25.184 22:51:46 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:42:25.184 22:51:46 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:25.184 22:51:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:25.184 22:51:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:25.443 ************************************ 00:42:25.443 START TEST fio_dif_rand_params 00:42:25.443 ************************************ 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:42:25.443 22:51:46 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:25.443 bdev_null0 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.443 
22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:25.443 [2024-12-14 22:51:46.103505] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:25.443 22:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:25.443 { 00:42:25.443 "params": { 00:42:25.443 "name": 
"Nvme$subsystem", 00:42:25.443 "trtype": "$TEST_TRANSPORT", 00:42:25.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:25.443 "adrfam": "ipv4", 00:42:25.443 "trsvcid": "$NVMF_PORT", 00:42:25.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:25.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:25.443 "hdgst": ${hdgst:-false}, 00:42:25.443 "ddgst": ${ddgst:-false} 00:42:25.443 }, 00:42:25.443 "method": "bdev_nvme_attach_controller" 00:42:25.443 } 00:42:25.443 EOF 00:42:25.443 )") 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:25.444 22:51:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:25.444 "params": { 00:42:25.444 "name": "Nvme0", 00:42:25.444 "trtype": "tcp", 00:42:25.444 "traddr": "10.0.0.2", 00:42:25.444 "adrfam": "ipv4", 00:42:25.444 "trsvcid": "4420", 00:42:25.444 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:25.444 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:25.444 "hdgst": false, 00:42:25.444 "ddgst": false 00:42:25.444 }, 00:42:25.444 "method": "bdev_nvme_attach_controller" 00:42:25.444 }' 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:25.444 22:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:25.444 22:51:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:25.702 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:25.702 ... 00:42:25.702 fio-3.35 00:42:25.702 Starting 3 threads 00:42:32.266 00:42:32.266 filename0: (groupid=0, jobs=1): err= 0: pid=639017: Sat Dec 14 22:51:51 2024 00:42:32.266 read: IOPS=321, BW=40.2MiB/s (42.1MB/s)(203MiB/5048msec) 00:42:32.266 slat (nsec): min=6279, max=49380, avg=13341.50, stdev=6095.68 00:42:32.266 clat (usec): min=3360, max=51186, avg=9287.20, stdev=5081.68 00:42:32.266 lat (usec): min=3367, max=51198, avg=9300.55, stdev=5081.54 00:42:32.266 clat percentiles (usec): 00:42:32.266 | 1.00th=[ 3621], 5.00th=[ 6587], 10.00th=[ 7308], 20.00th=[ 7898], 00:42:32.266 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9110], 00:42:32.266 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10159], 95.00th=[10552], 00:42:32.266 | 99.00th=[47449], 99.50th=[48497], 99.90th=[50594], 99.95th=[51119], 00:42:32.266 | 99.99th=[51119] 00:42:32.266 bw ( KiB/s): min=30464, max=45312, per=35.28%, avg=41497.60, stdev=4401.83, samples=10 00:42:32.266 iops : min= 238, max= 354, avg=324.20, stdev=34.39, samples=10 00:42:32.266 lat (msec) : 4=1.48%, 10=86.14%, 20=10.78%, 50=1.42%, 100=0.18% 00:42:32.266 cpu : usr=95.72%, sys=3.98%, ctx=9, majf=0, minf=75 00:42:32.266 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:32.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.266 issued rwts: total=1623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.266 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:32.266 filename0: (groupid=0, jobs=1): err= 0: pid=639018: Sat Dec 14 22:51:51 2024 00:42:32.266 read: IOPS=301, BW=37.7MiB/s 
(39.5MB/s)(190MiB/5045msec) 00:42:32.266 slat (nsec): min=6241, max=62666, avg=13693.89, stdev=6305.71 00:42:32.266 clat (usec): min=3561, max=52515, avg=9911.94, stdev=4980.24 00:42:32.266 lat (usec): min=3567, max=52539, avg=9925.64, stdev=4980.11 00:42:32.266 clat percentiles (usec): 00:42:32.266 | 1.00th=[ 5800], 5.00th=[ 6783], 10.00th=[ 7635], 20.00th=[ 8455], 00:42:32.266 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765], 00:42:32.266 | 70.00th=[10028], 80.00th=[10421], 90.00th=[11076], 95.00th=[11600], 00:42:32.266 | 99.00th=[47449], 99.50th=[49546], 99.90th=[52691], 99.95th=[52691], 00:42:32.266 | 99.99th=[52691] 00:42:32.266 bw ( KiB/s): min=33792, max=41984, per=33.03%, avg=38853.90, stdev=3018.51, samples=10 00:42:32.266 iops : min= 264, max= 328, avg=303.50, stdev=23.66, samples=10 00:42:32.266 lat (msec) : 4=0.59%, 10=67.76%, 20=30.13%, 50=1.25%, 100=0.26% 00:42:32.266 cpu : usr=94.90%, sys=4.80%, ctx=9, majf=0, minf=24 00:42:32.266 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:32.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.266 issued rwts: total=1520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.266 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:32.266 filename0: (groupid=0, jobs=1): err= 0: pid=639019: Sat Dec 14 22:51:51 2024 00:42:32.266 read: IOPS=296, BW=37.1MiB/s (38.9MB/s)(187MiB/5043msec) 00:42:32.266 slat (nsec): min=6275, max=58183, avg=15732.58, stdev=7680.09 00:42:32.266 clat (usec): min=3215, max=50480, avg=10066.87, stdev=4070.57 00:42:32.266 lat (usec): min=3221, max=50509, avg=10082.60, stdev=4071.34 00:42:32.266 clat percentiles (usec): 00:42:32.266 | 1.00th=[ 3621], 5.00th=[ 5997], 10.00th=[ 6849], 20.00th=[ 8717], 00:42:32.266 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10421], 00:42:32.266 | 70.00th=[10814], 
80.00th=[11338], 90.00th=[11731], 95.00th=[12256], 00:42:32.266 | 99.00th=[13960], 99.50th=[45351], 99.90th=[50594], 99.95th=[50594], 00:42:32.266 | 99.99th=[50594] 00:42:32.266 bw ( KiB/s): min=33536, max=41984, per=32.51%, avg=38238.00, stdev=2782.93, samples=10 00:42:32.266 iops : min= 262, max= 328, avg=298.70, stdev=21.70, samples=10 00:42:32.266 lat (msec) : 4=2.14%, 10=47.33%, 20=49.60%, 50=0.80%, 100=0.13% 00:42:32.266 cpu : usr=94.05%, sys=5.08%, ctx=161, majf=0, minf=100 00:42:32.266 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:32.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.266 issued rwts: total=1496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.266 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:32.266 00:42:32.266 Run status group 0 (all jobs): 00:42:32.266 READ: bw=115MiB/s (120MB/s), 37.1MiB/s-40.2MiB/s (38.9MB/s-42.1MB/s), io=580MiB (608MB), run=5043-5048msec 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.266 22:51:52 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.266 bdev_null0 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.266 [2024-12-14 22:51:52.191020] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:32.266 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:42:32.267 bdev_null1 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.267 bdev_null2 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:42:32.267 22:51:52 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:32.267 { 00:42:32.267 "params": { 00:42:32.267 "name": "Nvme$subsystem", 00:42:32.267 "trtype": "$TEST_TRANSPORT", 00:42:32.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:32.267 "adrfam": "ipv4", 00:42:32.267 "trsvcid": "$NVMF_PORT", 00:42:32.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:32.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:32.267 "hdgst": ${hdgst:-false}, 00:42:32.267 "ddgst": ${ddgst:-false} 00:42:32.267 }, 00:42:32.267 "method": "bdev_nvme_attach_controller" 00:42:32.267 } 00:42:32.267 EOF 00:42:32.267 )") 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:32.267 { 00:42:32.267 "params": { 00:42:32.267 "name": "Nvme$subsystem", 00:42:32.267 "trtype": "$TEST_TRANSPORT", 00:42:32.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:32.267 "adrfam": "ipv4", 00:42:32.267 "trsvcid": "$NVMF_PORT", 00:42:32.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:32.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:32.267 "hdgst": ${hdgst:-false}, 00:42:32.267 "ddgst": ${ddgst:-false} 00:42:32.267 }, 00:42:32.267 "method": "bdev_nvme_attach_controller" 00:42:32.267 } 00:42:32.267 EOF 00:42:32.267 )") 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # cat 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:32.267 { 00:42:32.267 "params": { 00:42:32.267 "name": "Nvme$subsystem", 00:42:32.267 "trtype": "$TEST_TRANSPORT", 00:42:32.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:32.267 "adrfam": "ipv4", 00:42:32.267 "trsvcid": "$NVMF_PORT", 00:42:32.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:32.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:32.267 "hdgst": ${hdgst:-false}, 00:42:32.267 "ddgst": ${ddgst:-false} 00:42:32.267 }, 00:42:32.267 "method": "bdev_nvme_attach_controller" 00:42:32.267 } 00:42:32.267 EOF 00:42:32.267 )") 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:32.267 "params": { 00:42:32.267 "name": "Nvme0", 00:42:32.267 "trtype": "tcp", 00:42:32.267 "traddr": "10.0.0.2", 00:42:32.267 "adrfam": "ipv4", 00:42:32.267 "trsvcid": "4420", 00:42:32.267 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:32.267 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:32.267 "hdgst": false, 00:42:32.267 "ddgst": false 00:42:32.267 }, 00:42:32.267 "method": "bdev_nvme_attach_controller" 00:42:32.267 },{ 00:42:32.267 "params": { 00:42:32.267 "name": "Nvme1", 00:42:32.267 "trtype": "tcp", 00:42:32.267 "traddr": "10.0.0.2", 00:42:32.267 "adrfam": "ipv4", 00:42:32.267 "trsvcid": "4420", 00:42:32.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:32.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:32.267 "hdgst": false, 00:42:32.267 "ddgst": false 00:42:32.267 }, 00:42:32.267 "method": "bdev_nvme_attach_controller" 00:42:32.267 },{ 00:42:32.267 "params": { 00:42:32.267 "name": "Nvme2", 00:42:32.267 "trtype": "tcp", 00:42:32.267 "traddr": "10.0.0.2", 00:42:32.267 "adrfam": "ipv4", 00:42:32.267 "trsvcid": "4420", 00:42:32.267 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:42:32.267 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:42:32.267 "hdgst": false, 00:42:32.267 "ddgst": false 00:42:32.267 }, 00:42:32.267 "method": "bdev_nvme_attach_controller" 00:42:32.267 }' 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:32.267 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:32.267 22:51:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:32.268 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:32.268 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:32.268 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:32.268 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:32.268 22:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:32.268 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:32.268 ... 00:42:32.268 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:32.268 ... 00:42:32.268 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:32.268 ... 
00:42:32.268 fio-3.35 00:42:32.268 Starting 24 threads 00:42:44.473 00:42:44.473 filename0: (groupid=0, jobs=1): err= 0: pid=640134: Sat Dec 14 22:52:03 2024 00:42:44.473 read: IOPS=540, BW=2163KiB/s (2215kB/s)(21.1MiB/10001msec) 00:42:44.473 slat (usec): min=7, max=111, avg=24.45, stdev=13.77 00:42:44.473 clat (usec): min=1189, max=32077, avg=29391.36, stdev=4969.91 00:42:44.473 lat (usec): min=1201, max=32093, avg=29415.81, stdev=4971.41 00:42:44.473 clat percentiles (usec): 00:42:44.473 | 1.00th=[ 1549], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:42:44.473 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:42:44.473 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:42:44.473 | 99.00th=[31065], 99.50th=[31851], 99.90th=[32113], 99.95th=[32113], 00:42:44.473 | 99.99th=[32113] 00:42:44.473 bw ( KiB/s): min= 2048, max= 3456, per=4.37%, avg=2162.53, stdev=318.99, samples=19 00:42:44.473 iops : min= 512, max= 864, avg=540.63, stdev=79.75, samples=19 00:42:44.473 lat (msec) : 2=2.03%, 4=0.44%, 10=0.48%, 20=0.89%, 50=96.15% 00:42:44.473 cpu : usr=98.36%, sys=1.11%, ctx=42, majf=0, minf=9 00:42:44.473 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:42:44.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.473 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.473 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.473 filename0: (groupid=0, jobs=1): err= 0: pid=640135: Sat Dec 14 22:52:03 2024 00:42:44.473 read: IOPS=514, BW=2057KiB/s (2106kB/s)(20.4MiB/10145msec) 00:42:44.473 slat (nsec): min=4824, max=96214, avg=41132.38, stdev=22035.65 00:42:44.473 clat (msec): min=20, max=196, avg=30.70, stdev= 9.22 00:42:44.473 lat (msec): min=21, max=196, avg=30.75, stdev= 9.23 00:42:44.473 clat percentiles (msec): 00:42:44.473 | 1.00th=[ 30], 
5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:42:44.473 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.473 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.473 | 99.00th=[ 33], 99.50th=[ 39], 99.90th=[ 197], 99.95th=[ 197], 00:42:44.473 | 99.99th=[ 197] 00:42:44.473 bw ( KiB/s): min= 1920, max= 2176, per=4.21%, avg=2080.00, stdev=81.75, samples=20 00:42:44.473 iops : min= 480, max= 544, avg=520.00, stdev=20.44, samples=20 00:42:44.473 lat (msec) : 50=99.69%, 250=0.31% 00:42:44.473 cpu : usr=98.49%, sys=1.12%, ctx=17, majf=0, minf=9 00:42:44.473 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:44.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.473 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.473 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.473 filename0: (groupid=0, jobs=1): err= 0: pid=640136: Sat Dec 14 22:52:03 2024 00:42:44.473 read: IOPS=515, BW=2062KiB/s (2111kB/s)(20.4MiB/10151msec) 00:42:44.473 slat (usec): min=6, max=102, avg=37.21, stdev=18.50 00:42:44.473 clat (msec): min=19, max=196, avg=30.70, stdev= 9.22 00:42:44.473 lat (msec): min=19, max=196, avg=30.73, stdev= 9.22 00:42:44.473 clat percentiles (msec): 00:42:44.473 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 31], 00:42:44.473 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.473 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.473 | 99.00th=[ 32], 99.50th=[ 33], 99.90th=[ 197], 99.95th=[ 197], 00:42:44.473 | 99.99th=[ 197] 00:42:44.473 bw ( KiB/s): min= 1920, max= 2176, per=4.22%, avg=2086.60, stdev=73.01, samples=20 00:42:44.473 iops : min= 480, max= 544, avg=521.65, stdev=18.25, samples=20 00:42:44.473 lat (msec) : 20=0.31%, 50=99.39%, 250=0.31% 00:42:44.473 cpu : usr=98.59%, sys=0.98%, ctx=22, 
majf=0, minf=9 00:42:44.473 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:44.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.473 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.473 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.473 filename0: (groupid=0, jobs=1): err= 0: pid=640137: Sat Dec 14 22:52:03 2024 00:42:44.473 read: IOPS=518, BW=2075KiB/s (2124kB/s)(20.6MiB/10180msec) 00:42:44.473 slat (nsec): min=7947, max=84486, avg=28039.78, stdev=13399.36 00:42:44.473 clat (msec): min=8, max=196, avg=30.60, stdev= 9.35 00:42:44.473 lat (msec): min=8, max=196, avg=30.63, stdev= 9.35 00:42:44.473 clat percentiles (msec): 00:42:44.473 | 1.00th=[ 15], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 31], 00:42:44.473 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.473 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.473 | 99.00th=[ 32], 99.50th=[ 33], 99.90th=[ 197], 99.95th=[ 197], 00:42:44.473 | 99.99th=[ 197] 00:42:44.473 bw ( KiB/s): min= 2048, max= 2304, per=4.26%, avg=2105.60, stdev=77.42, samples=20 00:42:44.473 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:42:44.473 lat (msec) : 10=0.30%, 20=0.87%, 50=98.52%, 250=0.30% 00:42:44.473 cpu : usr=98.32%, sys=1.17%, ctx=97, majf=0, minf=9 00:42:44.473 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:42:44.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.473 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.473 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.473 filename0: (groupid=0, jobs=1): err= 0: pid=640138: Sat Dec 14 22:52:03 2024 00:42:44.473 read: IOPS=515, BW=2061KiB/s 
(2111kB/s)(20.4MiB/10152msec) 00:42:44.473 slat (usec): min=7, max=117, avg=42.48, stdev=21.56 00:42:44.474 clat (msec): min=19, max=196, avg=30.65, stdev= 9.22 00:42:44.474 lat (msec): min=19, max=197, avg=30.69, stdev= 9.22 00:42:44.474 clat percentiles (msec): 00:42:44.474 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:42:44.474 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.474 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.474 | 99.00th=[ 32], 99.50th=[ 33], 99.90th=[ 197], 99.95th=[ 197], 00:42:44.474 | 99.99th=[ 197] 00:42:44.474 bw ( KiB/s): min= 1920, max= 2176, per=4.22%, avg=2086.40, stdev=73.12, samples=20 00:42:44.474 iops : min= 480, max= 544, avg=521.60, stdev=18.28, samples=20 00:42:44.474 lat (msec) : 20=0.27%, 50=99.43%, 250=0.31% 00:42:44.474 cpu : usr=98.69%, sys=0.92%, ctx=13, majf=0, minf=9 00:42:44.474 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:44.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.474 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.474 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.474 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.474 filename0: (groupid=0, jobs=1): err= 0: pid=640139: Sat Dec 14 22:52:03 2024 00:42:44.474 read: IOPS=514, BW=2057KiB/s (2106kB/s)(20.4MiB/10143msec) 00:42:44.474 slat (nsec): min=4706, max=44430, avg=20098.32, stdev=6736.11 00:42:44.474 clat (msec): min=19, max=156, avg=30.95, stdev= 7.37 00:42:44.474 lat (msec): min=20, max=156, avg=30.97, stdev= 7.37 00:42:44.474 clat percentiles (msec): 00:42:44.474 | 1.00th=[ 22], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:42:44.474 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.474 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.474 | 99.00th=[ 41], 99.50th=[ 59], 99.90th=[ 157], 99.95th=[ 157], 
00:42:44.474 | 99.99th=[ 157] 00:42:44.474 bw ( KiB/s): min= 1920, max= 2176, per=4.21%, avg=2080.00, stdev=78.21, samples=20 00:42:44.474 iops : min= 480, max= 544, avg=520.00, stdev=19.55, samples=20 00:42:44.474 lat (msec) : 20=0.02%, 50=99.06%, 100=0.61%, 250=0.31% 00:42:44.474 cpu : usr=98.78%, sys=0.83%, ctx=12, majf=0, minf=9 00:42:44.474 IO depths : 1=4.4%, 2=10.6%, 4=24.8%, 8=52.1%, 16=8.1%, 32=0.0%, >=64=0.0% 00:42:44.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.474 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.474 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.474 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.474 filename0: (groupid=0, jobs=1): err= 0: pid=640140: Sat Dec 14 22:52:03 2024 00:42:44.474 read: IOPS=514, BW=2057KiB/s (2106kB/s)(20.4MiB/10144msec) 00:42:44.474 slat (nsec): min=4526, max=96061, avg=40232.92, stdev=21985.40 00:42:44.474 clat (msec): min=21, max=196, avg=30.71, stdev= 9.22 00:42:44.474 lat (msec): min=21, max=197, avg=30.75, stdev= 9.22 00:42:44.474 clat percentiles (msec): 00:42:44.474 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:42:44.474 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.474 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.474 | 99.00th=[ 32], 99.50th=[ 36], 99.90th=[ 197], 99.95th=[ 197], 00:42:44.474 | 99.99th=[ 197] 00:42:44.474 bw ( KiB/s): min= 1920, max= 2176, per=4.21%, avg=2080.00, stdev=70.42, samples=20 00:42:44.474 iops : min= 480, max= 544, avg=520.00, stdev=17.60, samples=20 00:42:44.474 lat (msec) : 50=99.65%, 100=0.04%, 250=0.31% 00:42:44.474 cpu : usr=98.74%, sys=0.87%, ctx=13, majf=0, minf=9 00:42:44.474 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:44.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.474 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.474 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.474 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.474 filename0: (groupid=0, jobs=1): err= 0: pid=640141: Sat Dec 14 22:52:03 2024 00:42:44.474 read: IOPS=518, BW=2075KiB/s (2124kB/s)(20.6MiB/10180msec) 00:42:44.474 slat (nsec): min=7439, max=99883, avg=21180.67, stdev=17188.96 00:42:44.474 clat (msec): min=8, max=196, avg=30.68, stdev= 9.33 00:42:44.474 lat (msec): min=8, max=196, avg=30.70, stdev= 9.33 00:42:44.474 clat percentiles (msec): 00:42:44.474 | 1.00th=[ 16], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 31], 00:42:44.474 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.474 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.474 | 99.00th=[ 32], 99.50th=[ 33], 99.90th=[ 197], 99.95th=[ 197], 00:42:44.474 | 99.99th=[ 197] 00:42:44.474 bw ( KiB/s): min= 2048, max= 2304, per=4.26%, avg=2105.60, stdev=77.42, samples=20 00:42:44.474 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:42:44.474 lat (msec) : 10=0.30%, 20=0.91%, 50=98.48%, 250=0.30% 00:42:44.474 cpu : usr=98.57%, sys=1.03%, ctx=16, majf=0, minf=9 00:42:44.474 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:44.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.474 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.474 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.474 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.474 filename1: (groupid=0, jobs=1): err= 0: pid=640142: Sat Dec 14 22:52:03 2024 00:42:44.474 read: IOPS=514, BW=2057KiB/s (2106kB/s)(20.4MiB/10145msec) 00:42:44.474 slat (nsec): min=4501, max=39361, avg=17843.44, stdev=6360.96 00:42:44.474 clat (msec): min=19, max=159, avg=30.96, stdev= 7.28 00:42:44.474 lat (msec): min=19, max=159, avg=30.98, stdev= 7.28 
00:42:44.474 clat percentiles (msec): 00:42:44.474 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:42:44.474 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.474 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.474 | 99.00th=[ 41], 99.50th=[ 59], 99.90th=[ 157], 99.95th=[ 157], 00:42:44.474 | 99.99th=[ 159] 00:42:44.474 bw ( KiB/s): min= 1920, max= 2176, per=4.21%, avg=2080.15, stdev=81.44, samples=20 00:42:44.474 iops : min= 480, max= 544, avg=520.00, stdev=20.44, samples=20 00:42:44.474 lat (msec) : 20=0.04%, 50=99.35%, 100=0.31%, 250=0.31% 00:42:44.474 cpu : usr=98.66%, sys=0.94%, ctx=13, majf=0, minf=9 00:42:44.474 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:44.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.474 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.474 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.474 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.474 filename1: (groupid=0, jobs=1): err= 0: pid=640143: Sat Dec 14 22:52:03 2024 00:42:44.474 read: IOPS=514, BW=2057KiB/s (2107kB/s)(20.4MiB/10145msec) 00:42:44.474 slat (nsec): min=6067, max=86723, avg=20820.29, stdev=8506.63 00:42:44.474 clat (msec): min=19, max=199, avg=30.82, stdev= 7.84 00:42:44.474 lat (msec): min=19, max=199, avg=30.84, stdev= 7.84 00:42:44.474 clat percentiles (msec): 00:42:44.474 | 1.00th=[ 23], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:42:44.474 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.474 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.474 | 99.00th=[ 42], 99.50th=[ 51], 99.90th=[ 201], 99.95th=[ 201], 00:42:44.474 | 99.99th=[ 201] 00:42:44.474 bw ( KiB/s): min= 1792, max= 2176, per=4.21%, avg=2081.00, stdev=91.43, samples=20 00:42:44.474 iops : min= 448, max= 544, avg=520.25, stdev=22.86, samples=20 00:42:44.474 
lat (msec) : 20=0.19%, 50=99.39%, 100=0.11%, 250=0.31% 00:42:44.474 cpu : usr=98.61%, sys=1.00%, ctx=12, majf=0, minf=9 00:42:44.474 IO depths : 1=5.8%, 2=11.8%, 4=24.1%, 8=51.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:42:44.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.474 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.474 issued rwts: total=5218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.474 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.474 filename1: (groupid=0, jobs=1): err= 0: pid=640145: Sat Dec 14 22:52:03 2024 00:42:44.474 read: IOPS=514, BW=2060KiB/s (2109kB/s)(20.4MiB/10141msec) 00:42:44.474 slat (usec): min=4, max=100, avg=40.40, stdev=22.15 00:42:44.474 clat (msec): min=19, max=196, avg=30.67, stdev= 9.30 00:42:44.474 lat (msec): min=19, max=196, avg=30.71, stdev= 9.30 00:42:44.474 clat percentiles (msec): 00:42:44.474 | 1.00th=[ 26], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:42:44.474 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.474 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.474 | 99.00th=[ 32], 99.50th=[ 44], 99.90th=[ 197], 99.95th=[ 197], 00:42:44.474 | 99.99th=[ 197] 00:42:44.474 bw ( KiB/s): min= 1920, max= 2176, per=4.21%, avg=2082.40, stdev=70.08, samples=20 00:42:44.474 iops : min= 480, max= 544, avg=520.60, stdev=17.52, samples=20 00:42:44.474 lat (msec) : 20=0.11%, 50=99.54%, 100=0.04%, 250=0.31% 00:42:44.474 cpu : usr=98.47%, sys=1.13%, ctx=12, majf=0, minf=9 00:42:44.474 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:42:44.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.474 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.474 issued rwts: total=5222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.474 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.474 filename1: (groupid=0, 
jobs=1): err= 0: pid=640146: Sat Dec 14 22:52:03 2024 00:42:44.474 read: IOPS=515, BW=2061KiB/s (2111kB/s)(20.4MiB/10152msec) 00:42:44.474 slat (nsec): min=6253, max=96206, avg=40420.24, stdev=22053.59 00:42:44.474 clat (msec): min=19, max=196, avg=30.68, stdev= 9.20 00:42:44.474 lat (msec): min=19, max=196, avg=30.72, stdev= 9.20 00:42:44.474 clat percentiles (msec): 00:42:44.474 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:42:44.474 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.474 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.474 | 99.00th=[ 32], 99.50th=[ 33], 99.90th=[ 197], 99.95th=[ 197], 00:42:44.474 | 99.99th=[ 197] 00:42:44.474 bw ( KiB/s): min= 1920, max= 2176, per=4.22%, avg=2086.40, stdev=73.12, samples=20 00:42:44.474 iops : min= 480, max= 544, avg=521.60, stdev=18.28, samples=20 00:42:44.474 lat (msec) : 20=0.27%, 50=99.43%, 250=0.31% 00:42:44.474 cpu : usr=98.58%, sys=1.02%, ctx=13, majf=0, minf=9 00:42:44.474 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:44.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.474 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.474 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.474 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.474 filename1: (groupid=0, jobs=1): err= 0: pid=640147: Sat Dec 14 22:52:03 2024 00:42:44.475 read: IOPS=514, BW=2059KiB/s (2109kB/s)(20.4MiB/10144msec) 00:42:44.475 slat (nsec): min=5640, max=99638, avg=29758.42, stdev=19040.83 00:42:44.475 clat (msec): min=18, max=198, avg=30.78, stdev= 9.34 00:42:44.475 lat (msec): min=18, max=198, avg=30.81, stdev= 9.34 00:42:44.475 clat percentiles (msec): 00:42:44.475 | 1.00th=[ 27], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:42:44.475 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.475 | 70.00th=[ 31], 80.00th=[ 31], 
90.00th=[ 31], 95.00th=[ 31], 00:42:44.475 | 99.00th=[ 32], 99.50th=[ 50], 99.90th=[ 197], 99.95th=[ 197], 00:42:44.475 | 99.99th=[ 199] 00:42:44.475 bw ( KiB/s): min= 1920, max= 2176, per=4.21%, avg=2082.55, stdev=77.16, samples=20 00:42:44.475 iops : min= 480, max= 544, avg=520.60, stdev=19.35, samples=20 00:42:44.475 lat (msec) : 20=0.19%, 50=99.46%, 100=0.04%, 250=0.31% 00:42:44.475 cpu : usr=98.60%, sys=1.01%, ctx=12, majf=0, minf=9 00:42:44.475 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:42:44.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.475 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.475 issued rwts: total=5222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.475 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.475 filename1: (groupid=0, jobs=1): err= 0: pid=640148: Sat Dec 14 22:52:03 2024 00:42:44.475 read: IOPS=514, BW=2057KiB/s (2106kB/s)(20.4MiB/10145msec) 00:42:44.475 slat (nsec): min=4555, max=73942, avg=21218.30, stdev=6497.56 00:42:44.475 clat (msec): min=19, max=158, avg=30.93, stdev= 7.39 00:42:44.475 lat (msec): min=19, max=158, avg=30.96, stdev= 7.39 00:42:44.475 clat percentiles (msec): 00:42:44.475 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:42:44.475 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.475 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.475 | 99.00th=[ 42], 99.50th=[ 59], 99.90th=[ 159], 99.95th=[ 159], 00:42:44.475 | 99.99th=[ 159] 00:42:44.475 bw ( KiB/s): min= 1920, max= 2176, per=4.21%, avg=2080.15, stdev=81.44, samples=20 00:42:44.475 iops : min= 480, max= 544, avg=520.00, stdev=20.44, samples=20 00:42:44.475 lat (msec) : 20=0.04%, 50=99.04%, 100=0.61%, 250=0.31% 00:42:44.475 cpu : usr=98.49%, sys=1.12%, ctx=12, majf=0, minf=9 00:42:44.475 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:42:44.475 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.475 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.475 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.475 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.475 filename1: (groupid=0, jobs=1): err= 0: pid=640149: Sat Dec 14 22:52:03 2024 00:42:44.475 read: IOPS=518, BW=2076KiB/s (2126kB/s)(20.6MiB/10174msec) 00:42:44.475 slat (nsec): min=7520, max=79014, avg=12024.68, stdev=7519.11 00:42:44.475 clat (msec): min=8, max=196, avg=30.72, stdev= 9.38 00:42:44.475 lat (msec): min=8, max=196, avg=30.74, stdev= 9.38 00:42:44.475 clat percentiles (msec): 00:42:44.475 | 1.00th=[ 15], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:42:44.475 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.475 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.475 | 99.00th=[ 32], 99.50th=[ 33], 99.90th=[ 197], 99.95th=[ 197], 00:42:44.475 | 99.99th=[ 197] 00:42:44.475 bw ( KiB/s): min= 2048, max= 2432, per=4.26%, avg=2105.60, stdev=97.17, samples=20 00:42:44.475 iops : min= 512, max= 608, avg=526.40, stdev=24.29, samples=20 00:42:44.475 lat (msec) : 10=0.30%, 20=1.21%, 50=98.18%, 250=0.30% 00:42:44.475 cpu : usr=98.57%, sys=1.01%, ctx=12, majf=0, minf=9 00:42:44.475 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:44.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.475 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.475 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.475 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.475 filename1: (groupid=0, jobs=1): err= 0: pid=640150: Sat Dec 14 22:52:03 2024 00:42:44.475 read: IOPS=518, BW=2075KiB/s (2124kB/s)(20.6MiB/10180msec) 00:42:44.475 slat (nsec): min=10192, max=99836, avg=30875.43, stdev=18645.07 00:42:44.475 clat 
(msec): min=8, max=196, avg=30.56, stdev= 9.35 00:42:44.475 lat (msec): min=8, max=196, avg=30.59, stdev= 9.35 00:42:44.475 clat percentiles (msec): 00:42:44.475 | 1.00th=[ 15], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:42:44.475 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.475 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.475 | 99.00th=[ 32], 99.50th=[ 33], 99.90th=[ 197], 99.95th=[ 197], 00:42:44.475 | 99.99th=[ 197] 00:42:44.475 bw ( KiB/s): min= 2048, max= 2304, per=4.26%, avg=2105.60, stdev=77.42, samples=20 00:42:44.475 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:42:44.475 lat (msec) : 10=0.30%, 20=0.91%, 50=98.48%, 250=0.30% 00:42:44.475 cpu : usr=98.60%, sys=0.98%, ctx=11, majf=0, minf=9 00:42:44.475 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:44.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.475 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.475 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.475 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.475 filename2: (groupid=0, jobs=1): err= 0: pid=640151: Sat Dec 14 22:52:03 2024 00:42:44.475 read: IOPS=518, BW=2075KiB/s (2124kB/s)(20.6MiB/10180msec) 00:42:44.475 slat (nsec): min=9809, max=99826, avg=32324.44, stdev=18406.83 00:42:44.475 clat (msec): min=8, max=196, avg=30.55, stdev= 9.36 00:42:44.475 lat (msec): min=8, max=196, avg=30.58, stdev= 9.36 00:42:44.475 clat percentiles (msec): 00:42:44.475 | 1.00th=[ 15], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:42:44.475 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.475 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.475 | 99.00th=[ 32], 99.50th=[ 33], 99.90th=[ 197], 99.95th=[ 197], 00:42:44.475 | 99.99th=[ 197] 00:42:44.475 bw ( KiB/s): min= 2048, max= 2304, per=4.26%, avg=2105.60, 
stdev=77.42, samples=20 00:42:44.475 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:42:44.475 lat (msec) : 10=0.30%, 20=0.91%, 50=98.48%, 250=0.30% 00:42:44.475 cpu : usr=98.69%, sys=0.90%, ctx=13, majf=0, minf=9 00:42:44.475 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:44.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.475 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.475 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.475 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.475 filename2: (groupid=0, jobs=1): err= 0: pid=640152: Sat Dec 14 22:52:03 2024 00:42:44.475 read: IOPS=515, BW=2061KiB/s (2111kB/s)(20.4MiB/10152msec) 00:42:44.475 slat (nsec): min=7609, max=77540, avg=28858.88, stdev=15210.42 00:42:44.475 clat (msec): min=19, max=196, avg=30.82, stdev= 9.21 00:42:44.475 lat (msec): min=19, max=196, avg=30.85, stdev= 9.21 00:42:44.475 clat percentiles (msec): 00:42:44.475 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:42:44.475 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.475 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.475 | 99.00th=[ 32], 99.50th=[ 33], 99.90th=[ 197], 99.95th=[ 197], 00:42:44.475 | 99.99th=[ 197] 00:42:44.475 bw ( KiB/s): min= 1920, max= 2176, per=4.22%, avg=2086.40, stdev=73.12, samples=20 00:42:44.475 iops : min= 480, max= 544, avg=521.60, stdev=18.28, samples=20 00:42:44.475 lat (msec) : 20=0.27%, 50=99.43%, 250=0.31% 00:42:44.475 cpu : usr=98.70%, sys=0.94%, ctx=15, majf=0, minf=10 00:42:44.475 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:44.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.475 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.475 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:42:44.475 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.475 filename2: (groupid=0, jobs=1): err= 0: pid=640153: Sat Dec 14 22:52:03 2024 00:42:44.475 read: IOPS=518, BW=2075KiB/s (2124kB/s)(20.6MiB/10180msec) 00:42:44.475 slat (nsec): min=7599, max=99778, avg=29006.38, stdev=19963.56 00:42:44.475 clat (msec): min=9, max=196, avg=30.58, stdev= 9.29 00:42:44.475 lat (msec): min=9, max=196, avg=30.61, stdev= 9.29 00:42:44.475 clat percentiles (msec): 00:42:44.475 | 1.00th=[ 19], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:42:44.475 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.475 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.475 | 99.00th=[ 32], 99.50th=[ 33], 99.90th=[ 197], 99.95th=[ 197], 00:42:44.475 | 99.99th=[ 197] 00:42:44.475 bw ( KiB/s): min= 2048, max= 2304, per=4.26%, avg=2105.60, stdev=77.42, samples=20 00:42:44.475 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:42:44.475 lat (msec) : 10=0.13%, 20=1.34%, 50=98.22%, 250=0.30% 00:42:44.475 cpu : usr=98.49%, sys=1.10%, ctx=12, majf=0, minf=9 00:42:44.475 IO depths : 1=5.7%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:42:44.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.475 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.475 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.475 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.475 filename2: (groupid=0, jobs=1): err= 0: pid=640154: Sat Dec 14 22:52:03 2024 00:42:44.475 read: IOPS=514, BW=2057KiB/s (2106kB/s)(20.4MiB/10143msec) 00:42:44.475 slat (nsec): min=4498, max=40871, avg=20789.85, stdev=5702.43 00:42:44.475 clat (msec): min=20, max=156, avg=30.93, stdev= 7.25 00:42:44.475 lat (msec): min=20, max=156, avg=30.95, stdev= 7.25 00:42:44.475 clat percentiles (msec): 00:42:44.475 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 
00:42:44.475 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.475 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.475 | 99.00th=[ 41], 99.50th=[ 59], 99.90th=[ 157], 99.95th=[ 157], 00:42:44.475 | 99.99th=[ 157] 00:42:44.475 bw ( KiB/s): min= 1920, max= 2176, per=4.21%, avg=2080.00, stdev=80.59, samples=20 00:42:44.475 iops : min= 480, max= 544, avg=520.00, stdev=20.15, samples=20 00:42:44.475 lat (msec) : 50=99.08%, 100=0.61%, 250=0.31% 00:42:44.475 cpu : usr=98.58%, sys=1.02%, ctx=15, majf=0, minf=9 00:42:44.475 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:42:44.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.475 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.475 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.476 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.476 filename2: (groupid=0, jobs=1): err= 0: pid=640155: Sat Dec 14 22:52:03 2024 00:42:44.476 read: IOPS=513, BW=2053KiB/s (2103kB/s)(20.3MiB/10142msec) 00:42:44.476 slat (nsec): min=5694, max=93427, avg=22153.27, stdev=9173.41 00:42:44.476 clat (msec): min=14, max=196, avg=30.98, stdev= 9.32 00:42:44.476 lat (msec): min=14, max=196, avg=31.00, stdev= 9.32 00:42:44.476 clat percentiles (msec): 00:42:44.476 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:42:44.476 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.476 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.476 | 99.00th=[ 41], 99.50th=[ 51], 99.90th=[ 197], 99.95th=[ 197], 00:42:44.476 | 99.99th=[ 197] 00:42:44.476 bw ( KiB/s): min= 1923, max= 2176, per=4.20%, avg=2076.15, stdev=74.16, samples=20 00:42:44.476 iops : min= 480, max= 544, avg=519.00, stdev=18.62, samples=20 00:42:44.476 lat (msec) : 20=0.19%, 50=99.39%, 100=0.12%, 250=0.31% 00:42:44.476 cpu : usr=98.49%, sys=1.10%, ctx=15, majf=0, minf=9 
00:42:44.476 IO depths : 1=5.8%, 2=11.9%, 4=24.5%, 8=51.1%, 16=6.8%, 32=0.0%, >=64=0.0% 00:42:44.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.476 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.476 issued rwts: total=5206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.476 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.476 filename2: (groupid=0, jobs=1): err= 0: pid=640156: Sat Dec 14 22:52:03 2024 00:42:44.476 read: IOPS=514, BW=2057KiB/s (2106kB/s)(20.4MiB/10144msec) 00:42:44.476 slat (usec): min=4, max=100, avg=40.95, stdev=21.70 00:42:44.476 clat (msec): min=21, max=196, avg=30.71, stdev= 9.22 00:42:44.476 lat (msec): min=21, max=196, avg=30.75, stdev= 9.22 00:42:44.476 clat percentiles (msec): 00:42:44.476 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:42:44.476 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.476 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.476 | 99.00th=[ 32], 99.50th=[ 36], 99.90th=[ 197], 99.95th=[ 197], 00:42:44.476 | 99.99th=[ 197] 00:42:44.476 bw ( KiB/s): min= 1920, max= 2176, per=4.21%, avg=2080.00, stdev=70.42, samples=20 00:42:44.476 iops : min= 480, max= 544, avg=520.00, stdev=17.60, samples=20 00:42:44.476 lat (msec) : 50=99.65%, 100=0.04%, 250=0.31% 00:42:44.476 cpu : usr=98.68%, sys=0.91%, ctx=12, majf=0, minf=9 00:42:44.476 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:44.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.476 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.476 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.476 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.476 filename2: (groupid=0, jobs=1): err= 0: pid=640157: Sat Dec 14 22:52:03 2024 00:42:44.476 read: IOPS=512, BW=2051KiB/s (2100kB/s)(20.3MiB/10142msec) 
00:42:44.476 slat (usec): min=7, max=110, avg=38.51, stdev=21.86 00:42:44.476 clat (msec): min=28, max=196, avg=30.83, stdev= 9.46 00:42:44.476 lat (msec): min=28, max=196, avg=30.87, stdev= 9.46 00:42:44.476 clat percentiles (msec): 00:42:44.476 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:42:44.476 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.476 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.476 | 99.00th=[ 32], 99.50th=[ 70], 99.90th=[ 197], 99.95th=[ 197], 00:42:44.476 | 99.99th=[ 197] 00:42:44.476 bw ( KiB/s): min= 1792, max= 2176, per=4.19%, avg=2073.60, stdev=98.27, samples=20 00:42:44.476 iops : min= 448, max= 544, avg=518.40, stdev=24.57, samples=20 00:42:44.476 lat (msec) : 50=99.38%, 100=0.31%, 250=0.31% 00:42:44.476 cpu : usr=98.54%, sys=1.05%, ctx=19, majf=0, minf=9 00:42:44.476 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:44.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.476 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.476 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.476 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.476 filename2: (groupid=0, jobs=1): err= 0: pid=640158: Sat Dec 14 22:52:03 2024 00:42:44.476 read: IOPS=512, BW=2051KiB/s (2101kB/s)(20.3MiB/10136msec) 00:42:44.476 slat (nsec): min=7920, max=95299, avg=47855.22, stdev=21560.26 00:42:44.476 clat (msec): min=21, max=197, avg=30.73, stdev= 9.38 00:42:44.476 lat (msec): min=21, max=197, avg=30.78, stdev= 9.38 00:42:44.476 clat percentiles (msec): 00:42:44.476 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:42:44.476 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:42:44.476 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:42:44.476 | 99.00th=[ 33], 99.50th=[ 60], 99.90th=[ 197], 99.95th=[ 197], 00:42:44.476 | 99.99th=[ 197] 
00:42:44.476 bw ( KiB/s): min= 1904, max= 2176, per=4.19%, avg=2072.80, stdev=80.50, samples=20 00:42:44.476 iops : min= 476, max= 544, avg=518.20, stdev=20.12, samples=20 00:42:44.476 lat (msec) : 50=99.38%, 100=0.31%, 250=0.31% 00:42:44.476 cpu : usr=98.71%, sys=0.87%, ctx=23, majf=0, minf=9 00:42:44.476 IO depths : 1=5.7%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:42:44.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.476 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.476 issued rwts: total=5198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.476 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:44.476 00:42:44.476 Run status group 0 (all jobs): 00:42:44.476 READ: bw=48.3MiB/s (50.6MB/s), 2051KiB/s-2163KiB/s (2100kB/s-2215kB/s), io=491MiB (515MB), run=10001-10180msec 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 
00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.476 bdev_null0 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:44.476 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.477 [2024-12-14 22:52:04.127506] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.477 22:52:04 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.477 bdev_null1 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:44.477 22:52:04 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:44.477 { 00:42:44.477 "params": { 00:42:44.477 "name": "Nvme$subsystem", 00:42:44.477 "trtype": "$TEST_TRANSPORT", 00:42:44.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:44.477 "adrfam": "ipv4", 00:42:44.477 "trsvcid": "$NVMF_PORT", 00:42:44.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:44.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:44.477 "hdgst": ${hdgst:-false}, 00:42:44.477 "ddgst": ${ddgst:-false} 00:42:44.477 }, 00:42:44.477 "method": "bdev_nvme_attach_controller" 00:42:44.477 } 00:42:44.477 EOF 00:42:44.477 )") 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:44.477 { 00:42:44.477 "params": { 00:42:44.477 "name": "Nvme$subsystem", 00:42:44.477 "trtype": "$TEST_TRANSPORT", 00:42:44.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:44.477 "adrfam": "ipv4", 00:42:44.477 "trsvcid": "$NVMF_PORT", 00:42:44.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:44.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:44.477 "hdgst": ${hdgst:-false}, 00:42:44.477 "ddgst": ${ddgst:-false} 00:42:44.477 }, 00:42:44.477 "method": "bdev_nvme_attach_controller" 00:42:44.477 } 00:42:44.477 EOF 00:42:44.477 )") 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:44.477 22:52:04 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:44.477 "params": { 00:42:44.477 "name": "Nvme0", 00:42:44.477 "trtype": "tcp", 00:42:44.477 "traddr": "10.0.0.2", 00:42:44.477 "adrfam": "ipv4", 00:42:44.477 "trsvcid": "4420", 00:42:44.477 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:44.477 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:44.477 "hdgst": false, 00:42:44.477 "ddgst": false 00:42:44.477 }, 00:42:44.477 "method": "bdev_nvme_attach_controller" 00:42:44.477 },{ 00:42:44.477 "params": { 00:42:44.477 "name": "Nvme1", 00:42:44.477 "trtype": "tcp", 00:42:44.477 "traddr": "10.0.0.2", 00:42:44.477 "adrfam": "ipv4", 00:42:44.477 "trsvcid": "4420", 00:42:44.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:44.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:44.477 "hdgst": false, 00:42:44.477 "ddgst": false 00:42:44.477 }, 00:42:44.477 "method": "bdev_nvme_attach_controller" 00:42:44.477 }' 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 
00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:44.477 22:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:44.477 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:44.477 ... 00:42:44.477 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:44.477 ... 00:42:44.477 fio-3.35 00:42:44.477 Starting 4 threads 00:42:49.747 00:42:49.747 filename0: (groupid=0, jobs=1): err= 0: pid=642102: Sat Dec 14 22:52:10 2024 00:42:49.747 read: IOPS=2701, BW=21.1MiB/s (22.1MB/s)(106MiB/5001msec) 00:42:49.747 slat (nsec): min=6143, max=52330, avg=9174.48, stdev=3562.66 00:42:49.747 clat (usec): min=740, max=5843, avg=2933.94, stdev=480.00 00:42:49.747 lat (usec): min=750, max=5863, avg=2943.11, stdev=479.80 00:42:49.747 clat percentiles (usec): 00:42:49.747 | 1.00th=[ 1958], 5.00th=[ 2278], 10.00th=[ 2442], 20.00th=[ 2573], 00:42:49.747 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2933], 60.00th=[ 2966], 00:42:49.747 | 70.00th=[ 3032], 80.00th=[ 3195], 90.00th=[ 3523], 95.00th=[ 3818], 00:42:49.747 | 99.00th=[ 4686], 99.50th=[ 4948], 99.90th=[ 5211], 99.95th=[ 5407], 00:42:49.747 | 99.99th=[ 5538] 00:42:49.747 bw ( KiB/s): min=21200, max=23216, per=25.47%, avg=21651.56, stdev=621.47, samples=9 00:42:49.747 iops : min= 2650, max= 2902, avg=2706.44, stdev=77.68, samples=9 00:42:49.747 lat (usec) : 750=0.01%, 1000=0.01% 00:42:49.747 lat (msec) : 2=1.27%, 4=95.06%, 10=3.66% 00:42:49.747 cpu : usr=96.28%, 
sys=3.42%, ctx=8, majf=0, minf=9 00:42:49.747 IO depths : 1=0.2%, 2=5.4%, 4=65.5%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:49.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:49.747 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:49.747 issued rwts: total=13512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:49.747 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:49.747 filename0: (groupid=0, jobs=1): err= 0: pid=642103: Sat Dec 14 22:52:10 2024 00:42:49.747 read: IOPS=2500, BW=19.5MiB/s (20.5MB/s)(97.7MiB/5001msec) 00:42:49.747 slat (nsec): min=6135, max=52344, avg=8883.03, stdev=3626.26 00:42:49.747 clat (usec): min=1320, max=5513, avg=3173.16, stdev=514.62 00:42:49.747 lat (usec): min=1332, max=5519, avg=3182.04, stdev=514.24 00:42:49.747 clat percentiles (usec): 00:42:49.747 | 1.00th=[ 2245], 5.00th=[ 2540], 10.00th=[ 2704], 20.00th=[ 2868], 00:42:49.747 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3097], 00:42:49.747 | 70.00th=[ 3228], 80.00th=[ 3458], 90.00th=[ 3818], 95.00th=[ 4293], 00:42:49.747 | 99.00th=[ 5014], 99.50th=[ 5145], 99.90th=[ 5276], 99.95th=[ 5342], 00:42:49.747 | 99.99th=[ 5538] 00:42:49.747 bw ( KiB/s): min=19104, max=20784, per=23.49%, avg=19969.78, stdev=602.84, samples=9 00:42:49.747 iops : min= 2388, max= 2598, avg=2496.22, stdev=75.36, samples=9 00:42:49.747 lat (msec) : 2=0.29%, 4=91.43%, 10=8.28% 00:42:49.747 cpu : usr=95.94%, sys=3.72%, ctx=8, majf=0, minf=10 00:42:49.747 IO depths : 1=0.3%, 2=2.0%, 4=70.3%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:49.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:49.747 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:49.747 issued rwts: total=12506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:49.747 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:49.747 filename1: (groupid=0, jobs=1): err= 0: pid=642104: Sat Dec 14 22:52:10 
2024 00:42:49.747 read: IOPS=2926, BW=22.9MiB/s (24.0MB/s)(114MiB/5002msec) 00:42:49.747 slat (nsec): min=6157, max=54673, avg=9505.94, stdev=4687.29 00:42:49.747 clat (usec): min=621, max=5560, avg=2703.35, stdev=404.24 00:42:49.747 lat (usec): min=644, max=5585, avg=2712.86, stdev=404.55 00:42:49.747 clat percentiles (usec): 00:42:49.747 | 1.00th=[ 1680], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2409], 00:42:49.747 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2769], 00:42:49.747 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3130], 95.00th=[ 3326], 00:42:49.747 | 99.00th=[ 3982], 99.50th=[ 4146], 99.90th=[ 4621], 99.95th=[ 4752], 00:42:49.747 | 99.99th=[ 4883] 00:42:49.747 bw ( KiB/s): min=22304, max=24336, per=27.70%, avg=23543.11, stdev=639.00, samples=9 00:42:49.747 iops : min= 2788, max= 3042, avg=2942.89, stdev=79.88, samples=9 00:42:49.747 lat (usec) : 750=0.03%, 1000=0.04% 00:42:49.747 lat (msec) : 2=2.92%, 4=96.17%, 10=0.83% 00:42:49.747 cpu : usr=94.60%, sys=4.86%, ctx=25, majf=0, minf=9 00:42:49.747 IO depths : 1=0.6%, 2=11.1%, 4=60.2%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:49.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:49.747 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:49.747 issued rwts: total=14636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:49.747 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:49.747 filename1: (groupid=0, jobs=1): err= 0: pid=642105: Sat Dec 14 22:52:10 2024 00:42:49.747 read: IOPS=2498, BW=19.5MiB/s (20.5MB/s)(97.6MiB/5002msec) 00:42:49.747 slat (nsec): min=6146, max=52591, avg=8998.70, stdev=3709.90 00:42:49.747 clat (usec): min=546, max=5560, avg=3176.35, stdev=456.50 00:42:49.747 lat (usec): min=557, max=5567, avg=3185.34, stdev=456.24 00:42:49.747 clat percentiles (usec): 00:42:49.747 | 1.00th=[ 2245], 5.00th=[ 2638], 10.00th=[ 2769], 20.00th=[ 2900], 00:42:49.747 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 
60.00th=[ 3163], 00:42:49.747 | 70.00th=[ 3261], 80.00th=[ 3458], 90.00th=[ 3720], 95.00th=[ 4178], 00:42:49.747 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 5145], 99.95th=[ 5276], 00:42:49.747 | 99.99th=[ 5538] 00:42:49.747 bw ( KiB/s): min=18992, max=21136, per=23.51%, avg=19983.10, stdev=663.84, samples=10 00:42:49.747 iops : min= 2374, max= 2642, avg=2497.80, stdev=83.03, samples=10 00:42:49.747 lat (usec) : 750=0.02% 00:42:49.747 lat (msec) : 2=0.34%, 4=93.16%, 10=6.48% 00:42:49.747 cpu : usr=96.08%, sys=3.60%, ctx=7, majf=0, minf=9 00:42:49.747 IO depths : 1=0.1%, 2=1.7%, 4=71.4%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:49.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:49.747 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:49.747 issued rwts: total=12495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:49.747 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:49.747 00:42:49.747 Run status group 0 (all jobs): 00:42:49.747 READ: bw=83.0MiB/s (87.0MB/s), 19.5MiB/s-22.9MiB/s (20.5MB/s-24.0MB/s), io=415MiB (435MB), run=5001-5002msec 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:49.747 00:42:49.747 real 0m24.508s 00:42:49.747 user 4m55.781s 00:42:49.747 sys 0m4.972s 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:49.747 22:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:49.747 ************************************ 00:42:49.748 END TEST fio_dif_rand_params 00:42:49.748 
************************************ 00:42:49.748 22:52:10 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:42:49.748 22:52:10 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:49.748 22:52:10 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:49.748 22:52:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:50.007 ************************************ 00:42:50.007 START TEST fio_dif_digest 00:42:50.007 ************************************ 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:50.007 bdev_null0 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:50.007 [2024-12-14 22:52:10.678956] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@131 
-- # create_json_sub_conf 0 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:50.007 { 00:42:50.007 "params": { 00:42:50.007 "name": "Nvme$subsystem", 00:42:50.007 "trtype": "$TEST_TRANSPORT", 00:42:50.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:50.007 "adrfam": "ipv4", 00:42:50.007 "trsvcid": "$NVMF_PORT", 00:42:50.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:50.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:50.007 "hdgst": ${hdgst:-false}, 00:42:50.007 "ddgst": ${ddgst:-false} 00:42:50.007 }, 00:42:50.007 "method": "bdev_nvme_attach_controller" 00:42:50.007 } 00:42:50.007 EOF 00:42:50.007 )") 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local 
sanitizers 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:42:50.007 22:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:50.007 "params": { 00:42:50.007 "name": "Nvme0", 00:42:50.007 "trtype": "tcp", 00:42:50.007 "traddr": "10.0.0.2", 00:42:50.008 "adrfam": "ipv4", 00:42:50.008 "trsvcid": "4420", 00:42:50.008 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:50.008 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:50.008 "hdgst": true, 00:42:50.008 "ddgst": true 00:42:50.008 }, 00:42:50.008 "method": "bdev_nvme_attach_controller" 00:42:50.008 }' 00:42:50.008 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:50.008 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:50.008 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:50.008 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:50.008 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:50.008 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:50.008 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:50.008 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:50.008 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:50.008 22:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:50.267 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:50.267 ... 
00:42:50.267 fio-3.35 00:42:50.267 Starting 3 threads 00:43:02.471 00:43:02.471 filename0: (groupid=0, jobs=1): err= 0: pid=643204: Sat Dec 14 22:52:21 2024 00:43:02.471 read: IOPS=291, BW=36.4MiB/s (38.1MB/s)(365MiB/10044msec) 00:43:02.471 slat (nsec): min=6473, max=61145, avg=14673.51, stdev=5215.39 00:43:02.471 clat (usec): min=5415, max=49980, avg=10277.80, stdev=1242.42 00:43:02.471 lat (usec): min=5425, max=49987, avg=10292.48, stdev=1241.81 00:43:02.471 clat percentiles (usec): 00:43:02.471 | 1.00th=[ 8717], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:43:02.471 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:43:02.471 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:43:02.471 | 99.00th=[12125], 99.50th=[12387], 99.90th=[13566], 99.95th=[47449], 00:43:02.471 | 99.99th=[50070] 00:43:02.471 bw ( KiB/s): min=35584, max=38400, per=35.39%, avg=37388.80, stdev=861.06, samples=20 00:43:02.471 iops : min= 278, max= 300, avg=292.10, stdev= 6.73, samples=20 00:43:02.471 lat (msec) : 10=36.95%, 20=62.98%, 50=0.07% 00:43:02.471 cpu : usr=95.20%, sys=4.50%, ctx=32, majf=0, minf=70 00:43:02.471 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:02.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.471 issued rwts: total=2923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:02.471 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:02.471 filename0: (groupid=0, jobs=1): err= 0: pid=643205: Sat Dec 14 22:52:21 2024 00:43:02.471 read: IOPS=262, BW=32.8MiB/s (34.4MB/s)(330MiB/10044msec) 00:43:02.471 slat (nsec): min=6495, max=45048, avg=14385.40, stdev=6287.68 00:43:02.471 clat (usec): min=8512, max=49675, avg=11398.25, stdev=1237.02 00:43:02.471 lat (usec): min=8524, max=49687, avg=11412.63, stdev=1237.04 00:43:02.471 clat percentiles (usec): 00:43:02.471 | 1.00th=[ 
9765], 5.00th=[10290], 10.00th=[10552], 20.00th=[10814], 00:43:02.471 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:43:02.471 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:43:02.471 | 99.00th=[13173], 99.50th=[13435], 99.90th=[14615], 99.95th=[45876], 00:43:02.471 | 99.99th=[49546] 00:43:02.471 bw ( KiB/s): min=33024, max=34304, per=31.91%, avg=33715.20, stdev=362.99, samples=20 00:43:02.471 iops : min= 258, max= 268, avg=263.40, stdev= 2.84, samples=20 00:43:02.471 lat (msec) : 10=2.39%, 20=97.53%, 50=0.08% 00:43:02.471 cpu : usr=95.40%, sys=4.30%, ctx=23, majf=0, minf=63 00:43:02.471 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:02.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.471 issued rwts: total=2636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:02.471 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:02.471 filename0: (groupid=0, jobs=1): err= 0: pid=643206: Sat Dec 14 22:52:21 2024 00:43:02.471 read: IOPS=271, BW=34.0MiB/s (35.6MB/s)(341MiB/10043msec) 00:43:02.471 slat (nsec): min=6524, max=44537, avg=14287.64, stdev=6291.78 00:43:02.471 clat (usec): min=8552, max=46358, avg=11000.75, stdev=1188.38 00:43:02.471 lat (usec): min=8565, max=46370, avg=11015.04, stdev=1188.75 00:43:02.471 clat percentiles (usec): 00:43:02.471 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:43:02.471 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:43:02.471 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12125], 00:43:02.472 | 99.00th=[12780], 99.50th=[13042], 99.90th=[13698], 99.95th=[44827], 00:43:02.472 | 99.99th=[46400] 00:43:02.472 bw ( KiB/s): min=33792, max=36352, per=33.06%, avg=34931.20, stdev=707.09, samples=20 00:43:02.472 iops : min= 264, max= 284, avg=272.90, stdev= 5.52, samples=20 
00:43:02.472 lat (msec) : 10=8.42%, 20=91.50%, 50=0.07% 00:43:02.472 cpu : usr=95.32%, sys=4.38%, ctx=16, majf=0, minf=92 00:43:02.472 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:02.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.472 issued rwts: total=2731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:02.472 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:02.472 00:43:02.472 Run status group 0 (all jobs): 00:43:02.472 READ: bw=103MiB/s (108MB/s), 32.8MiB/s-36.4MiB/s (34.4MB/s-38.1MB/s), io=1036MiB (1087MB), run=10043-10044msec 00:43:02.472 22:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:43:02.472 22:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:43:02.472 22:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:43:02.472 22:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:02.472 22:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:43:02.472 22:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:02.472 22:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.472 22:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:02.472 22:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.472 22:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:02.472 22:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.472 22:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:02.472 22:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.472 00:43:02.472 real 0m11.166s 
00:43:02.472 user 0m35.724s 00:43:02.472 sys 0m1.607s 00:43:02.472 22:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:02.472 22:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:02.472 ************************************ 00:43:02.472 END TEST fio_dif_digest 00:43:02.472 ************************************ 00:43:02.472 22:52:21 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:43:02.472 22:52:21 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:43:02.472 22:52:21 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:02.472 22:52:21 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:43:02.472 22:52:21 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:02.472 22:52:21 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:43:02.472 22:52:21 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:02.472 22:52:21 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:02.472 rmmod nvme_tcp 00:43:02.472 rmmod nvme_fabrics 00:43:02.472 rmmod nvme_keyring 00:43:02.472 22:52:21 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:02.472 22:52:21 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:43:02.472 22:52:21 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:43:02.472 22:52:21 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 634945 ']' 00:43:02.472 22:52:21 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 634945 00:43:02.472 22:52:21 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 634945 ']' 00:43:02.472 22:52:21 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 634945 00:43:02.472 22:52:21 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:43:02.472 22:52:21 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:02.472 22:52:21 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 634945 00:43:02.472 22:52:21 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:02.472 22:52:21 nvmf_dif -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:02.472 22:52:21 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 634945' 00:43:02.472 killing process with pid 634945 00:43:02.472 22:52:21 nvmf_dif -- common/autotest_common.sh@973 -- # kill 634945 00:43:02.472 22:52:21 nvmf_dif -- common/autotest_common.sh@978 -- # wait 634945 00:43:02.472 22:52:22 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:02.472 22:52:22 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:04.378 Waiting for block devices as requested 00:43:04.378 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:04.378 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:04.378 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:04.378 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:04.378 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:04.378 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:04.638 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:04.638 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:04.638 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:04.897 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:04.897 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:04.897 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:05.156 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:05.156 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:05.156 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:05.156 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:05.416 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:05.416 22:52:26 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:05.416 22:52:26 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:05.416 22:52:26 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:43:05.416 22:52:26 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:43:05.416 22:52:26 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:43:05.416 22:52:26 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:43:05.416 22:52:26 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:05.416 22:52:26 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:05.416 22:52:26 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:05.416 22:52:26 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:05.416 22:52:26 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:07.952 22:52:28 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:07.952 00:43:07.952 real 1m14.011s 00:43:07.952 user 7m13.243s 00:43:07.952 sys 0m20.423s 00:43:07.952 22:52:28 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:07.952 22:52:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:07.952 ************************************ 00:43:07.952 END TEST nvmf_dif 00:43:07.952 ************************************ 00:43:07.952 22:52:28 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:07.952 22:52:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:07.952 22:52:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:07.952 22:52:28 -- common/autotest_common.sh@10 -- # set +x 00:43:07.952 ************************************ 00:43:07.952 START TEST nvmf_abort_qd_sizes 00:43:07.952 ************************************ 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:07.952 * Looking for test storage... 
00:43:07.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:07.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:07.952 --rc genhtml_branch_coverage=1 00:43:07.952 --rc genhtml_function_coverage=1 00:43:07.952 --rc genhtml_legend=1 00:43:07.952 --rc geninfo_all_blocks=1 00:43:07.952 --rc geninfo_unexecuted_blocks=1 00:43:07.952 00:43:07.952 ' 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:07.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:07.952 --rc genhtml_branch_coverage=1 00:43:07.952 --rc genhtml_function_coverage=1 00:43:07.952 --rc genhtml_legend=1 00:43:07.952 --rc 
geninfo_all_blocks=1 00:43:07.952 --rc geninfo_unexecuted_blocks=1 00:43:07.952 00:43:07.952 ' 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:07.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:07.952 --rc genhtml_branch_coverage=1 00:43:07.952 --rc genhtml_function_coverage=1 00:43:07.952 --rc genhtml_legend=1 00:43:07.952 --rc geninfo_all_blocks=1 00:43:07.952 --rc geninfo_unexecuted_blocks=1 00:43:07.952 00:43:07.952 ' 00:43:07.952 22:52:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:07.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:07.953 --rc genhtml_branch_coverage=1 00:43:07.953 --rc genhtml_function_coverage=1 00:43:07.953 --rc genhtml_legend=1 00:43:07.953 --rc geninfo_all_blocks=1 00:43:07.953 --rc geninfo_unexecuted_blocks=1 00:43:07.953 00:43:07.953 ' 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:07.953 22:52:28 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:07.953 22:52:28 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:07.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:43:07.953 22:52:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:14.519 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:14.519 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:43:14.519 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:14.519 22:52:34 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:43:14.519 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:14.519 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:14.519 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:14.519 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:43:14.519 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:14.519 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:43:14.519 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:43:14.519 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:43:14.519 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:43:14.519 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:43:14.519 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:43:14.519 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:14.519 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:14.519 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:14.520 22:52:34 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:14.520 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:14.520 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:14.520 Found net devices under 0000:af:00.0: cvl_0_0 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:14.520 Found net devices under 0000:af:00.1: cvl_0_1 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:14.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:14.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:43:14.520 00:43:14.520 --- 10.0.0.2 ping statistics --- 00:43:14.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:14.520 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:14.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:14.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:43:14.520 00:43:14.520 --- 10.0.0.1 ping statistics --- 00:43:14.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:14.520 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:14.520 22:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:16.426 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:16.426 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:16.426 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:16.426 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:16.426 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:16.426 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:16.426 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:16.426 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:16.426 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:16.426 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:16.426 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:16.426 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:16.426 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:16.426 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:16.426 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:16.426 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:17.363 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:17.363 22:52:38 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:17.363 22:52:38 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:17.363 22:52:38 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:17.363 22:52:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:17.363 22:52:38 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:17.363 22:52:38 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:17.363 22:52:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:17.363 22:52:38 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:17.363 22:52:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:17.622 22:52:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:17.622 22:52:38 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=650976 00:43:17.622 22:52:38 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 650976 00:43:17.622 22:52:38 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:17.622 22:52:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 650976 ']' 00:43:17.622 22:52:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:17.622 22:52:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:17.622 22:52:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:17.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:17.622 22:52:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:17.622 22:52:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:17.622 [2024-12-14 22:52:38.302037] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:43:17.622 [2024-12-14 22:52:38.302080] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:17.622 [2024-12-14 22:52:38.386264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:17.622 [2024-12-14 22:52:38.410517] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:17.622 [2024-12-14 22:52:38.410556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:17.622 [2024-12-14 22:52:38.410564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:17.622 [2024-12-14 22:52:38.410570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:17.622 [2024-12-14 22:52:38.410575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:17.622 [2024-12-14 22:52:38.411991] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:17.622 [2024-12-14 22:52:38.412101] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:43:17.622 [2024-12-14 22:52:38.412115] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:43:17.622 [2024-12-14 22:52:38.412122] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:17.881 22:52:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:17.881 ************************************ 00:43:17.881 START TEST spdk_target_abort 00:43:17.881 ************************************ 00:43:17.881 22:52:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:43:17.881 22:52:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:17.881 22:52:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:43:17.881 22:52:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:17.881 22:52:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:21.168 spdk_targetn1 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:21.168 [2024-12-14 22:52:41.431202] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:21.168 [2024-12-14 22:52:41.479571] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:21.168 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:21.169 22:52:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:24.463 Initializing NVMe Controllers 00:43:24.463 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:24.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:24.463 Initialization complete. Launching workers. 
00:43:24.463 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15023, failed: 0 00:43:24.463 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1221, failed to submit 13802 00:43:24.463 success 671, unsuccessful 550, failed 0 00:43:24.463 22:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:24.463 22:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:27.749 Initializing NVMe Controllers 00:43:27.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:27.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:27.749 Initialization complete. Launching workers. 00:43:27.749 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8552, failed: 0 00:43:27.749 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1216, failed to submit 7336 00:43:27.749 success 328, unsuccessful 888, failed 0 00:43:27.749 22:52:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:27.749 22:52:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:31.083 Initializing NVMe Controllers 00:43:31.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:31.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:31.083 Initialization complete. Launching workers. 
00:43:31.083 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38049, failed: 0 00:43:31.083 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2862, failed to submit 35187 00:43:31.083 success 568, unsuccessful 2294, failed 0 00:43:31.083 22:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:43:31.083 22:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.083 22:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:31.083 22:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.083 22:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:43:31.083 22:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.083 22:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:31.752 22:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.752 22:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 650976 00:43:31.752 22:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 650976 ']' 00:43:31.752 22:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 650976 00:43:31.752 22:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:43:31.752 22:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:31.752 22:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 650976 00:43:31.752 22:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:31.752 22:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:31.752 22:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 650976' 00:43:31.752 killing process with pid 650976 00:43:31.752 22:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 650976 00:43:31.752 22:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 650976 00:43:32.169 00:43:32.169 real 0m14.170s 00:43:32.169 user 0m54.331s 00:43:32.169 sys 0m2.276s 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:32.169 ************************************ 00:43:32.169 END TEST spdk_target_abort 00:43:32.169 ************************************ 00:43:32.169 22:52:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:43:32.169 22:52:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:32.169 22:52:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:32.169 22:52:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:32.169 ************************************ 00:43:32.169 START TEST kernel_target_abort 00:43:32.169 ************************************ 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:43:32.169 22:52:52 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:43:32.169 22:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:34.701 Waiting for block devices as requested 00:43:34.701 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:34.959 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:34.959 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:34.959 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:35.218 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:35.218 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:35.218 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:35.477 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:35.477 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:35.477 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:35.736 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:35.736 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:35.736 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:35.736 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:35.995 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:35.995 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:35.995 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:36.254 22:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:43:36.254 22:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:43:36.254 22:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:43:36.254 22:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:43:36.254 22:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:43:36.254 22:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:43:36.254 22:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:43:36.254 22:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:43:36.254 22:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:43:36.254 No valid GPT data, bailing 00:43:36.254 22:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:43:36.254 22:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:43:36.254 22:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:43:36.254 22:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:43:36.254 22:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:43:36.254 22:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:36.254 22:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:43:36.254 00:43:36.254 Discovery Log Number of Records 2, Generation counter 2 00:43:36.254 =====Discovery Log Entry 0====== 00:43:36.254 trtype: tcp 00:43:36.254 adrfam: ipv4 00:43:36.254 subtype: current discovery subsystem 00:43:36.254 treq: not specified, sq flow control disable supported 00:43:36.254 portid: 1 00:43:36.254 trsvcid: 4420 00:43:36.254 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:43:36.254 traddr: 10.0.0.1 00:43:36.254 eflags: none 00:43:36.254 sectype: none 00:43:36.254 =====Discovery Log Entry 1====== 00:43:36.254 trtype: tcp 00:43:36.254 adrfam: ipv4 00:43:36.254 subtype: nvme subsystem 00:43:36.254 treq: not specified, sq flow control disable supported 00:43:36.254 portid: 1 00:43:36.254 trsvcid: 4420 00:43:36.254 subnqn: nqn.2016-06.io.spdk:testnqn 00:43:36.254 traddr: 10.0.0.1 00:43:36.254 eflags: none 00:43:36.254 sectype: none 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
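The trace above drives the kernel NVMe-oF target entirely through configfs (`configure_kernel_target` in nvmf/common.sh): make the subsystem, namespace, and port directories, write the attributes, then link the subsystem into the port. A dry-run sketch of that sequence follows; the redirect targets are the standard nvmet configfs attribute names (the trace elides them), the device and NQN are taken from the log, and the `run` wrapper only prints each step — drop it and run as root, with the nvmet/nvmet_tcp modules loaded, to apply for real.

```shell
# Dry-run sketch of the configfs sequence configure_kernel_target performs above.
run() { echo "+ $*"; }   # print instead of execute

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

run mkdir "$subsys"                # create the subsystem
run mkdir "$subsys/namespaces/1"   # one namespace under it
run mkdir "$nvmet/ports/1"         # one port

# The 'echo 1' / 'echo /dev/nvme0n1' steps in the trace, mapped to the
# standard nvmet attributes (attribute names are our assumption; the log
# does not show the redirect targets):
run "echo 1 > $subsys/attr_allow_any_host"
run "echo /dev/nvme0n1 > $subsys/namespaces/1/device_path"
run "echo 1 > $subsys/namespaces/1/enable"

run "echo 10.0.0.1 > $nvmet/ports/1/addr_traddr"   # values from the log
run "echo tcp > $nvmet/ports/1/addr_trtype"
run "echo 4420 > $nvmet/ports/1/addr_trsvcid"
run "echo ipv4 > $nvmet/ports/1/addr_adrfam"

run ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # expose subsystem on the port
```

Once the symlink lands, the `nvme discover` output above shows both the discovery subsystem and nqn.2016-06.io.spdk:testnqn listening on 10.0.0.1:4420.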
nqn.2016-06.io.spdk:testnqn 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:43:36.254 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:36.255 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:36.255 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:36.255 22:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:39.543 Initializing NVMe Controllers 00:43:39.543 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:39.543 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:39.543 Initialization complete. Launching workers. 
00:43:39.543 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95870, failed: 0 00:43:39.543 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 95870, failed to submit 0 00:43:39.543 success 0, unsuccessful 95870, failed 0 00:43:39.543 22:53:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:39.543 22:53:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:42.832 Initializing NVMe Controllers 00:43:42.832 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:42.832 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:42.832 Initialization complete. Launching workers. 00:43:42.832 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 150952, failed: 0 00:43:42.832 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37858, failed to submit 113094 00:43:42.832 success 0, unsuccessful 37858, failed 0 00:43:42.832 22:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:42.832 22:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:46.121 Initializing NVMe Controllers 00:43:46.121 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:46.121 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:46.121 Initialization complete. Launching workers. 
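The three `abort` invocations in the trace differ only in `-q`: the `qds=(4 24 64)` array from abort_qd_sizes.sh sweeps the queue depth while the workload stays fixed. Condensed, and printed here as a dry run (pipe the output to `sh` with a built SPDK tree at the path from the log and the kernel target listening to actually execute):

```shell
# The three abort runs above, condensed into one queue-depth sweep (dry run).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TARGET='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

for qd in 4 24 64; do   # the qds=(4 24 64) array from abort_qd_sizes.sh
  echo "$SPDK/build/examples/abort -q $qd -w rw -M 50 -o 4096 -r '$TARGET'"
done
```

The per-run summaries above show why the sweep matters: at `-q 4` every abort submits, while at the deeper queue depths a large fraction fail to submit at all.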
00:43:46.121 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 143441, failed: 0 00:43:46.121 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35890, failed to submit 107551 00:43:46.121 success 0, unsuccessful 35890, failed 0 00:43:46.121 22:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:43:46.121 22:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:43:46.121 22:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:43:46.121 22:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:46.121 22:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:46.121 22:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:43:46.121 22:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:46.121 22:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:43:46.121 22:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:43:46.121 22:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:48.658 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:48.658 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:48.658 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:48.658 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:48.658 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:48.658 
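The `clean_kernel_target` steps traced above undo the setup in reverse: disable the namespace, remove the port-to-subsystem link, rmdir the configfs directories, then unload the modules. A dry-run sketch (same `run` print-only wrapper and the same attribute-name assumption as the setup; drop `run` and execute as root to apply):

```shell
# Dry-run sketch of the clean_kernel_target teardown traced above.
run() { echo "+ $*"; }   # print instead of execute

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

run "echo 0 > $subsys/namespaces/1/enable"   # the 'echo 0' step: disable namespace
run rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
run rmdir "$subsys/namespaces/1"
run rmdir "$nvmet/ports/1"
run rmdir "$subsys"
run modprobe -r nvmet_tcp nvmet              # unload transport, then core module
```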
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:48.658 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:48.658 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:48.658 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:48.658 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:48.658 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:48.658 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:48.658 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:48.658 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:48.658 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:48.658 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:49.596 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:49.596 00:43:49.596 real 0m17.439s 00:43:49.596 user 0m9.129s 00:43:49.596 sys 0m5.016s 00:43:49.596 22:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:49.596 22:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:49.596 ************************************ 00:43:49.596 END TEST kernel_target_abort 00:43:49.596 ************************************ 00:43:49.596 22:53:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:43:49.596 22:53:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:43:49.596 22:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:49.596 22:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:43:49.596 22:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:49.596 22:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:43:49.596 22:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:49.596 22:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:49.596 rmmod nvme_tcp 00:43:49.596 rmmod nvme_fabrics 00:43:49.596 rmmod nvme_keyring 00:43:49.596 22:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:43:49.596 22:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:43:49.596 22:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:43:49.596 22:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 650976 ']' 00:43:49.596 22:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 650976 00:43:49.596 22:53:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 650976 ']' 00:43:49.596 22:53:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 650976 00:43:49.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (650976) - No such process 00:43:49.596 22:53:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 650976 is not found' 00:43:49.596 Process with pid 650976 is not found 00:43:49.596 22:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:49.596 22:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:52.131 Waiting for block devices as requested 00:43:52.391 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:52.391 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:52.650 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:52.650 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:52.650 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:52.650 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:52.908 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:52.908 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:52.908 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:53.167 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:53.167 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:53.167 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:53.167 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:53.426 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:53.426 0000:80:04.2 
(8086 2021): vfio-pci -> ioatdma 00:43:53.426 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:53.685 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:53.685 22:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:53.685 22:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:53.685 22:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:43:53.685 22:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:43:53.685 22:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:53.685 22:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:43:53.685 22:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:53.685 22:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:53.685 22:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:53.685 22:53:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:53.685 22:53:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:56.219 22:53:16 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:56.219 00:43:56.219 real 0m48.180s 00:43:56.219 user 1m7.743s 00:43:56.219 sys 0m16.040s 00:43:56.219 22:53:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:56.219 22:53:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:56.219 ************************************ 00:43:56.219 END TEST nvmf_abort_qd_sizes 00:43:56.219 ************************************ 00:43:56.219 22:53:16 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:56.219 22:53:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:56.219 22:53:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:43:56.219 22:53:16 -- common/autotest_common.sh@10 -- # set +x 00:43:56.219 ************************************ 00:43:56.219 START TEST keyring_file 00:43:56.219 ************************************ 00:43:56.219 22:53:16 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:56.219 * Looking for test storage... 00:43:56.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:56.219 22:53:16 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:56.219 22:53:16 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:43:56.219 22:53:16 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:56.219 22:53:16 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@345 -- # : 1 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:56.220 22:53:16 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@353 -- # local d=1 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@355 -- # echo 1 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@353 -- # local d=2 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@355 -- # echo 2 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@368 -- # return 0 00:43:56.220 22:53:16 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:56.220 22:53:16 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:56.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.220 --rc genhtml_branch_coverage=1 00:43:56.220 --rc genhtml_function_coverage=1 00:43:56.220 --rc genhtml_legend=1 00:43:56.220 --rc geninfo_all_blocks=1 00:43:56.220 --rc geninfo_unexecuted_blocks=1 00:43:56.220 00:43:56.220 ' 00:43:56.220 22:53:16 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:56.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.220 --rc genhtml_branch_coverage=1 00:43:56.220 --rc genhtml_function_coverage=1 00:43:56.220 --rc genhtml_legend=1 00:43:56.220 --rc geninfo_all_blocks=1 00:43:56.220 --rc 
geninfo_unexecuted_blocks=1 00:43:56.220 00:43:56.220 ' 00:43:56.220 22:53:16 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:56.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.220 --rc genhtml_branch_coverage=1 00:43:56.220 --rc genhtml_function_coverage=1 00:43:56.220 --rc genhtml_legend=1 00:43:56.220 --rc geninfo_all_blocks=1 00:43:56.220 --rc geninfo_unexecuted_blocks=1 00:43:56.220 00:43:56.220 ' 00:43:56.220 22:53:16 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:56.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.220 --rc genhtml_branch_coverage=1 00:43:56.220 --rc genhtml_function_coverage=1 00:43:56.220 --rc genhtml_legend=1 00:43:56.220 --rc geninfo_all_blocks=1 00:43:56.220 --rc geninfo_unexecuted_blocks=1 00:43:56.220 00:43:56.220 ' 00:43:56.220 22:53:16 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:56.220 22:53:16 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:56.220 22:53:16 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:56.220 22:53:16 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:56.220 22:53:16 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.220 22:53:16 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.220 22:53:16 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.220 22:53:16 keyring_file -- paths/export.sh@5 -- # export PATH 00:43:56.220 22:53:16 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@51 -- # : 0 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:43:56.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:56.220 22:53:16 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:56.220 22:53:16 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:56.220 22:53:16 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:56.220 22:53:16 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:43:56.220 22:53:16 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:43:56.220 22:53:16 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:43:56.220 22:53:16 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:56.220 22:53:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:56.220 22:53:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:56.220 22:53:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:56.220 22:53:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:56.220 22:53:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:56.220 22:53:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.m9fHX0SEfp 00:43:56.220 22:53:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:56.220 22:53:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.m9fHX0SEfp 00:43:56.220 22:53:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.m9fHX0SEfp 00:43:56.220 22:53:16 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.m9fHX0SEfp 00:43:56.220 22:53:16 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:43:56.220 22:53:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:56.220 22:53:16 keyring_file -- keyring/common.sh@17 -- # name=key1 00:43:56.220 22:53:16 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:56.220 22:53:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:56.220 22:53:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:56.220 22:53:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AU6x5cjpDg 00:43:56.220 22:53:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:56.220 22:53:16 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:43:56.221 22:53:16 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:56.221 22:53:16 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:43:56.221 22:53:16 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:56.221 22:53:16 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:56.221 22:53:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AU6x5cjpDg 00:43:56.221 22:53:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AU6x5cjpDg 00:43:56.221 22:53:16 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.AU6x5cjpDg 
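The `prep_key` calls traced above boil down to: derive an interchange-format TLS PSK from the hex key, write it to a `mktemp` file, and lock the permissions to 0600 so bdevperf will accept it. A minimal sketch, with an illustrative key string (the real value comes from `format_interchange_psk`, which the trace runs via the embedded `python -` step and is not shown here):

```shell
# Sketch of prep_key from keyring/common.sh: a private temp file holding a
# TLS PSK in NVMe interchange format. The key string below is illustrative;
# the trace derives the real one from the hex key via format_interchange_psk.
path=$(mktemp)
echo "NVMeTLSkey-1:00:example-psk-material:" > "$path"
chmod 0600 "$path"   # keyring code requires owner-only permissions
```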
00:43:56.221 22:53:16 keyring_file -- keyring/file.sh@30 -- # tgtpid=659441 00:43:56.221 22:53:16 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:56.221 22:53:16 keyring_file -- keyring/file.sh@32 -- # waitforlisten 659441 00:43:56.221 22:53:16 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 659441 ']' 00:43:56.221 22:53:16 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:56.221 22:53:16 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:56.221 22:53:16 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:56.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:56.221 22:53:16 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:56.221 22:53:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:56.221 [2024-12-14 22:53:16.946392] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:43:56.221 [2024-12-14 22:53:16.946445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659441 ] 00:43:56.221 [2024-12-14 22:53:17.003125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:56.221 [2024-12-14 22:53:17.026221] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:56.484 22:53:17 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:56.484 [2024-12-14 22:53:17.231789] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:56.484 null0 00:43:56.484 [2024-12-14 22:53:17.263833] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:56.484 [2024-12-14 22:53:17.264124] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.484 22:53:17 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:56.484 [2024-12-14 22:53:17.295909] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:43:56.484 request: 00:43:56.484 { 00:43:56.484 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:43:56.484 "secure_channel": false, 00:43:56.484 "listen_address": { 00:43:56.484 "trtype": "tcp", 00:43:56.484 "traddr": "127.0.0.1", 00:43:56.484 "trsvcid": "4420" 00:43:56.484 }, 00:43:56.484 "method": "nvmf_subsystem_add_listener", 00:43:56.484 "req_id": 1 00:43:56.484 } 00:43:56.484 Got JSON-RPC error response 00:43:56.484 response: 00:43:56.484 { 00:43:56.484 "code": -32602, 00:43:56.484 "message": "Invalid parameters" 00:43:56.484 } 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:56.484 22:53:17 keyring_file -- keyring/file.sh@47 -- # bperfpid=659446 00:43:56.484 22:53:17 keyring_file -- keyring/file.sh@49 -- # waitforlisten 659446 /var/tmp/bperf.sock 00:43:56.484 22:53:17 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:43:56.484 22:53:17 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 659446 ']' 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:56.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:56.484 22:53:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:56.484 [2024-12-14 22:53:17.350297] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:43:56.484 [2024-12-14 22:53:17.350336] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659446 ] 00:43:56.742 [2024-12-14 22:53:17.425453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:56.742 [2024-12-14 22:53:17.447371] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:56.742 22:53:17 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:56.742 22:53:17 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:56.742 22:53:17 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.m9fHX0SEfp 00:43:56.742 22:53:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.m9fHX0SEfp 00:43:57.001 22:53:17 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.AU6x5cjpDg 00:43:57.001 22:53:17 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.AU6x5cjpDg 00:43:57.260 22:53:17 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:43:57.260 22:53:17 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:43:57.260 22:53:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:57.260 22:53:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:57.260 22:53:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:57.260 22:53:18 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.m9fHX0SEfp == \/\t\m\p\/\t\m\p\.\m\9\f\H\X\0\S\E\f\p ]] 00:43:57.260 22:53:18 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:43:57.260 22:53:18 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:43:57.260 22:53:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:57.260 22:53:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:57.260 22:53:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:57.519 22:53:18 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.AU6x5cjpDg == \/\t\m\p\/\t\m\p\.\A\U\6\x\5\c\j\p\D\g ]] 00:43:57.519 22:53:18 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:43:57.519 22:53:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:57.519 22:53:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:57.519 22:53:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:57.519 22:53:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:57.519 22:53:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:43:57.778 22:53:18 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:43:57.778 22:53:18 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:43:57.778 22:53:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:57.778 22:53:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:57.778 22:53:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:57.778 22:53:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:57.778 22:53:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:58.036 22:53:18 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:43:58.036 22:53:18 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:58.036 22:53:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:58.036 [2024-12-14 22:53:18.892581] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:58.294 nvme0n1 00:43:58.294 22:53:18 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:43:58.294 22:53:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:58.294 22:53:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:58.294 22:53:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:58.294 22:53:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:58.294 22:53:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:43:58.553 22:53:19 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:43:58.553 22:53:19 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:43:58.553 22:53:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:58.553 22:53:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:58.553 22:53:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:58.553 22:53:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:58.553 22:53:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:58.553 22:53:19 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:43:58.553 22:53:19 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:58.811 Running I/O for 1 seconds... 00:43:59.748 19143.00 IOPS, 74.78 MiB/s 00:43:59.748 Latency(us) 00:43:59.748 [2024-12-14T21:53:20.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:59.748 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:43:59.748 nvme0n1 : 1.00 19185.95 74.95 0.00 0.00 6659.25 2527.82 9986.44 00:43:59.748 [2024-12-14T21:53:20.632Z] =================================================================================================================== 00:43:59.748 [2024-12-14T21:53:20.632Z] Total : 19185.95 74.95 0.00 0.00 6659.25 2527.82 9986.44 00:43:59.748 { 00:43:59.748 "results": [ 00:43:59.748 { 00:43:59.748 "job": "nvme0n1", 00:43:59.748 "core_mask": "0x2", 00:43:59.748 "workload": "randrw", 00:43:59.748 "percentage": 50, 00:43:59.748 "status": "finished", 00:43:59.748 "queue_depth": 128, 00:43:59.748 "io_size": 4096, 00:43:59.748 "runtime": 1.004485, 00:43:59.748 "iops": 19185.9510097214, 00:43:59.748 "mibps": 74.94512113172422, 
00:43:59.748 "io_failed": 0, 00:43:59.748 "io_timeout": 0, 00:43:59.748 "avg_latency_us": 6659.249579157524, 00:43:59.748 "min_latency_us": 2527.8171428571427, 00:43:59.748 "max_latency_us": 9986.438095238096 00:43:59.748 } 00:43:59.748 ], 00:43:59.748 "core_count": 1 00:43:59.748 } 00:43:59.748 22:53:20 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:59.749 22:53:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:00.008 22:53:20 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:44:00.008 22:53:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:00.008 22:53:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:00.008 22:53:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:00.008 22:53:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:00.008 22:53:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:00.266 22:53:20 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:44:00.267 22:53:20 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:44:00.267 22:53:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:00.267 22:53:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:00.267 22:53:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:00.267 22:53:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:00.267 22:53:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:00.267 22:53:21 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:44:00.267 22:53:21 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:00.267 22:53:21 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:00.267 22:53:21 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:00.267 22:53:21 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:00.267 22:53:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:00.267 22:53:21 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:00.267 22:53:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:00.267 22:53:21 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:00.267 22:53:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:00.525 [2024-12-14 22:53:21.265634] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:00.525 [2024-12-14 22:53:21.265926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e8950 (107): Transport endpoint is not connected 00:44:00.525 [2024-12-14 22:53:21.266921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e8950 (9): Bad file descriptor 00:44:00.525 [2024-12-14 22:53:21.267922] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:00.525 [2024-12-14 22:53:21.267930] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:00.525 [2024-12-14 22:53:21.267937] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:00.525 [2024-12-14 22:53:21.267945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:44:00.525 request: 00:44:00.525 { 00:44:00.525 "name": "nvme0", 00:44:00.525 "trtype": "tcp", 00:44:00.525 "traddr": "127.0.0.1", 00:44:00.525 "adrfam": "ipv4", 00:44:00.525 "trsvcid": "4420", 00:44:00.525 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:00.525 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:00.525 "prchk_reftag": false, 00:44:00.525 "prchk_guard": false, 00:44:00.525 "hdgst": false, 00:44:00.525 "ddgst": false, 00:44:00.525 "psk": "key1", 00:44:00.525 "allow_unrecognized_csi": false, 00:44:00.525 "method": "bdev_nvme_attach_controller", 00:44:00.525 "req_id": 1 00:44:00.525 } 00:44:00.525 Got JSON-RPC error response 00:44:00.525 response: 00:44:00.525 { 00:44:00.525 "code": -5, 00:44:00.525 "message": "Input/output error" 00:44:00.525 } 00:44:00.525 22:53:21 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:00.525 22:53:21 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:00.525 22:53:21 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:00.525 22:53:21 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:00.525 22:53:21 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:44:00.525 22:53:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:00.525 22:53:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:00.525 22:53:21 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:44:00.525 22:53:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:00.525 22:53:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:00.783 22:53:21 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:44:00.783 22:53:21 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:44:00.783 22:53:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:00.783 22:53:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:00.783 22:53:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:00.783 22:53:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:00.783 22:53:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:01.042 22:53:21 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:44:01.042 22:53:21 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:44:01.042 22:53:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:01.042 22:53:21 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:44:01.042 22:53:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:44:01.300 22:53:22 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:44:01.300 22:53:22 keyring_file -- keyring/file.sh@78 -- # jq length 00:44:01.300 22:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:01.559 22:53:22 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:44:01.559 22:53:22 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.m9fHX0SEfp 00:44:01.559 22:53:22 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.m9fHX0SEfp 00:44:01.559 22:53:22 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:01.559 22:53:22 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.m9fHX0SEfp 00:44:01.559 22:53:22 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:01.559 22:53:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:01.559 22:53:22 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:01.559 22:53:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:01.559 22:53:22 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.m9fHX0SEfp 00:44:01.559 22:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.m9fHX0SEfp 00:44:01.559 [2024-12-14 22:53:22.427090] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.m9fHX0SEfp': 0100660 00:44:01.559 [2024-12-14 22:53:22.427114] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:44:01.559 request: 00:44:01.559 { 00:44:01.559 "name": "key0", 00:44:01.559 "path": "/tmp/tmp.m9fHX0SEfp", 00:44:01.559 "method": "keyring_file_add_key", 00:44:01.559 "req_id": 1 00:44:01.559 } 00:44:01.559 Got JSON-RPC error response 00:44:01.559 response: 00:44:01.559 { 00:44:01.559 "code": -1, 00:44:01.559 "message": "Operation not permitted" 00:44:01.559 } 00:44:01.559 22:53:22 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:01.559 22:53:22 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:01.559 22:53:22 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:01.818 22:53:22 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:01.818 22:53:22 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.m9fHX0SEfp 00:44:01.818 22:53:22 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.m9fHX0SEfp 00:44:01.818 22:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.m9fHX0SEfp 00:44:01.818 22:53:22 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.m9fHX0SEfp 00:44:01.818 22:53:22 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:44:01.818 22:53:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:01.818 22:53:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:01.818 22:53:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:01.818 22:53:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:01.818 22:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:02.077 22:53:22 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:44:02.077 22:53:22 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:02.077 22:53:22 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:02.077 22:53:22 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:02.077 22:53:22 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:02.077 22:53:22 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:02.077 22:53:22 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:02.077 22:53:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:02.077 22:53:22 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:02.077 22:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:02.336 [2024-12-14 22:53:23.012634] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.m9fHX0SEfp': No such file or directory 00:44:02.336 [2024-12-14 22:53:23.012653] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:44:02.336 [2024-12-14 22:53:23.012669] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:44:02.336 [2024-12-14 22:53:23.012692] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:44:02.336 [2024-12-14 22:53:23.012699] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:44:02.336 [2024-12-14 22:53:23.012705] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:44:02.336 request: 00:44:02.336 { 00:44:02.336 "name": "nvme0", 00:44:02.336 "trtype": "tcp", 00:44:02.336 "traddr": "127.0.0.1", 00:44:02.336 "adrfam": "ipv4", 00:44:02.336 "trsvcid": "4420", 00:44:02.336 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:02.336 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:44:02.336 "prchk_reftag": false, 00:44:02.336 "prchk_guard": false, 00:44:02.336 "hdgst": false, 00:44:02.336 "ddgst": false, 00:44:02.337 "psk": "key0", 00:44:02.337 "allow_unrecognized_csi": false, 00:44:02.337 "method": "bdev_nvme_attach_controller", 00:44:02.337 "req_id": 1 00:44:02.337 } 00:44:02.337 Got JSON-RPC error response 00:44:02.337 response: 00:44:02.337 { 00:44:02.337 "code": -19, 00:44:02.337 "message": "No such device" 00:44:02.337 } 00:44:02.337 22:53:23 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:02.337 22:53:23 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:02.337 22:53:23 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:02.337 22:53:23 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:02.337 22:53:23 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:44:02.337 22:53:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:02.596 22:53:23 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:02.596 22:53:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:02.596 22:53:23 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:02.596 22:53:23 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:02.596 22:53:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:02.596 22:53:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:02.596 22:53:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FUAhbSnqu5 00:44:02.596 22:53:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:02.596 22:53:23 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:02.596 22:53:23 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:44:02.596 22:53:23 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:02.596 22:53:23 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:02.596 22:53:23 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:02.596 22:53:23 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:02.596 22:53:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FUAhbSnqu5 00:44:02.596 22:53:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FUAhbSnqu5 00:44:02.596 22:53:23 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.FUAhbSnqu5 00:44:02.596 22:53:23 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FUAhbSnqu5 00:44:02.596 22:53:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FUAhbSnqu5 00:44:02.855 22:53:23 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:02.855 22:53:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:02.855 nvme0n1 00:44:03.114 22:53:23 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:44:03.114 22:53:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:03.114 22:53:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:03.114 22:53:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:03.114 22:53:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:03.114 22:53:23 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:03.114 22:53:23 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:44:03.114 22:53:23 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:44:03.114 22:53:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:03.372 22:53:24 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:44:03.372 22:53:24 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:44:03.372 22:53:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:03.372 22:53:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:03.372 22:53:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:03.631 22:53:24 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:44:03.631 22:53:24 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:44:03.631 22:53:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:03.631 22:53:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:03.631 22:53:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:03.631 22:53:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:03.631 22:53:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:03.889 22:53:24 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:44:03.889 22:53:24 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:03.889 22:53:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:44:03.890 22:53:24 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:44:03.890 22:53:24 keyring_file -- keyring/file.sh@105 -- # jq length 00:44:03.890 22:53:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:04.148 22:53:24 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:44:04.148 22:53:24 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FUAhbSnqu5 00:44:04.148 22:53:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FUAhbSnqu5 00:44:04.414 22:53:25 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.AU6x5cjpDg 00:44:04.414 22:53:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.AU6x5cjpDg 00:44:04.675 22:53:25 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:04.675 22:53:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:04.675 nvme0n1 00:44:04.933 22:53:25 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:44:04.933 22:53:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:44:05.192 22:53:25 keyring_file -- keyring/file.sh@113 -- # config='{ 00:44:05.192 "subsystems": [ 00:44:05.192 { 00:44:05.192 "subsystem": 
"keyring", 00:44:05.192 "config": [ 00:44:05.192 { 00:44:05.192 "method": "keyring_file_add_key", 00:44:05.192 "params": { 00:44:05.192 "name": "key0", 00:44:05.192 "path": "/tmp/tmp.FUAhbSnqu5" 00:44:05.192 } 00:44:05.192 }, 00:44:05.192 { 00:44:05.192 "method": "keyring_file_add_key", 00:44:05.192 "params": { 00:44:05.192 "name": "key1", 00:44:05.192 "path": "/tmp/tmp.AU6x5cjpDg" 00:44:05.192 } 00:44:05.192 } 00:44:05.192 ] 00:44:05.192 }, 00:44:05.192 { 00:44:05.192 "subsystem": "iobuf", 00:44:05.192 "config": [ 00:44:05.192 { 00:44:05.192 "method": "iobuf_set_options", 00:44:05.192 "params": { 00:44:05.192 "small_pool_count": 8192, 00:44:05.192 "large_pool_count": 1024, 00:44:05.192 "small_bufsize": 8192, 00:44:05.192 "large_bufsize": 135168, 00:44:05.192 "enable_numa": false 00:44:05.192 } 00:44:05.192 } 00:44:05.192 ] 00:44:05.192 }, 00:44:05.192 { 00:44:05.192 "subsystem": "sock", 00:44:05.192 "config": [ 00:44:05.192 { 00:44:05.192 "method": "sock_set_default_impl", 00:44:05.192 "params": { 00:44:05.192 "impl_name": "posix" 00:44:05.192 } 00:44:05.192 }, 00:44:05.192 { 00:44:05.192 "method": "sock_impl_set_options", 00:44:05.192 "params": { 00:44:05.192 "impl_name": "ssl", 00:44:05.192 "recv_buf_size": 4096, 00:44:05.192 "send_buf_size": 4096, 00:44:05.192 "enable_recv_pipe": true, 00:44:05.192 "enable_quickack": false, 00:44:05.192 "enable_placement_id": 0, 00:44:05.192 "enable_zerocopy_send_server": true, 00:44:05.192 "enable_zerocopy_send_client": false, 00:44:05.192 "zerocopy_threshold": 0, 00:44:05.192 "tls_version": 0, 00:44:05.192 "enable_ktls": false 00:44:05.192 } 00:44:05.192 }, 00:44:05.192 { 00:44:05.192 "method": "sock_impl_set_options", 00:44:05.192 "params": { 00:44:05.192 "impl_name": "posix", 00:44:05.192 "recv_buf_size": 2097152, 00:44:05.192 "send_buf_size": 2097152, 00:44:05.192 "enable_recv_pipe": true, 00:44:05.192 "enable_quickack": false, 00:44:05.192 "enable_placement_id": 0, 00:44:05.192 "enable_zerocopy_send_server": true, 
00:44:05.192 "enable_zerocopy_send_client": false, 00:44:05.192 "zerocopy_threshold": 0, 00:44:05.192 "tls_version": 0, 00:44:05.192 "enable_ktls": false 00:44:05.192 } 00:44:05.192 } 00:44:05.192 ] 00:44:05.192 }, 00:44:05.192 { 00:44:05.192 "subsystem": "vmd", 00:44:05.192 "config": [] 00:44:05.192 }, 00:44:05.192 { 00:44:05.192 "subsystem": "accel", 00:44:05.192 "config": [ 00:44:05.192 { 00:44:05.192 "method": "accel_set_options", 00:44:05.192 "params": { 00:44:05.192 "small_cache_size": 128, 00:44:05.192 "large_cache_size": 16, 00:44:05.192 "task_count": 2048, 00:44:05.192 "sequence_count": 2048, 00:44:05.192 "buf_count": 2048 00:44:05.192 } 00:44:05.192 } 00:44:05.192 ] 00:44:05.192 }, 00:44:05.192 { 00:44:05.192 "subsystem": "bdev", 00:44:05.192 "config": [ 00:44:05.192 { 00:44:05.192 "method": "bdev_set_options", 00:44:05.192 "params": { 00:44:05.192 "bdev_io_pool_size": 65535, 00:44:05.192 "bdev_io_cache_size": 256, 00:44:05.192 "bdev_auto_examine": true, 00:44:05.192 "iobuf_small_cache_size": 128, 00:44:05.192 "iobuf_large_cache_size": 16 00:44:05.192 } 00:44:05.192 }, 00:44:05.192 { 00:44:05.192 "method": "bdev_raid_set_options", 00:44:05.193 "params": { 00:44:05.193 "process_window_size_kb": 1024, 00:44:05.193 "process_max_bandwidth_mb_sec": 0 00:44:05.193 } 00:44:05.193 }, 00:44:05.193 { 00:44:05.193 "method": "bdev_iscsi_set_options", 00:44:05.193 "params": { 00:44:05.193 "timeout_sec": 30 00:44:05.193 } 00:44:05.193 }, 00:44:05.193 { 00:44:05.193 "method": "bdev_nvme_set_options", 00:44:05.193 "params": { 00:44:05.193 "action_on_timeout": "none", 00:44:05.193 "timeout_us": 0, 00:44:05.193 "timeout_admin_us": 0, 00:44:05.193 "keep_alive_timeout_ms": 10000, 00:44:05.193 "arbitration_burst": 0, 00:44:05.193 "low_priority_weight": 0, 00:44:05.193 "medium_priority_weight": 0, 00:44:05.193 "high_priority_weight": 0, 00:44:05.193 "nvme_adminq_poll_period_us": 10000, 00:44:05.193 "nvme_ioq_poll_period_us": 0, 00:44:05.193 "io_queue_requests": 512, 
00:44:05.193 "delay_cmd_submit": true, 00:44:05.193 "transport_retry_count": 4, 00:44:05.193 "bdev_retry_count": 3, 00:44:05.193 "transport_ack_timeout": 0, 00:44:05.193 "ctrlr_loss_timeout_sec": 0, 00:44:05.193 "reconnect_delay_sec": 0, 00:44:05.193 "fast_io_fail_timeout_sec": 0, 00:44:05.193 "disable_auto_failback": false, 00:44:05.193 "generate_uuids": false, 00:44:05.193 "transport_tos": 0, 00:44:05.193 "nvme_error_stat": false, 00:44:05.193 "rdma_srq_size": 0, 00:44:05.193 "io_path_stat": false, 00:44:05.193 "allow_accel_sequence": false, 00:44:05.193 "rdma_max_cq_size": 0, 00:44:05.193 "rdma_cm_event_timeout_ms": 0, 00:44:05.193 "dhchap_digests": [ 00:44:05.193 "sha256", 00:44:05.193 "sha384", 00:44:05.193 "sha512" 00:44:05.193 ], 00:44:05.193 "dhchap_dhgroups": [ 00:44:05.193 "null", 00:44:05.193 "ffdhe2048", 00:44:05.193 "ffdhe3072", 00:44:05.193 "ffdhe4096", 00:44:05.193 "ffdhe6144", 00:44:05.193 "ffdhe8192" 00:44:05.193 ], 00:44:05.193 "rdma_umr_per_io": false 00:44:05.193 } 00:44:05.193 }, 00:44:05.193 { 00:44:05.193 "method": "bdev_nvme_attach_controller", 00:44:05.193 "params": { 00:44:05.193 "name": "nvme0", 00:44:05.193 "trtype": "TCP", 00:44:05.193 "adrfam": "IPv4", 00:44:05.193 "traddr": "127.0.0.1", 00:44:05.193 "trsvcid": "4420", 00:44:05.193 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:05.193 "prchk_reftag": false, 00:44:05.193 "prchk_guard": false, 00:44:05.193 "ctrlr_loss_timeout_sec": 0, 00:44:05.193 "reconnect_delay_sec": 0, 00:44:05.193 "fast_io_fail_timeout_sec": 0, 00:44:05.193 "psk": "key0", 00:44:05.193 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:05.193 "hdgst": false, 00:44:05.193 "ddgst": false, 00:44:05.193 "multipath": "multipath" 00:44:05.193 } 00:44:05.193 }, 00:44:05.193 { 00:44:05.193 "method": "bdev_nvme_set_hotplug", 00:44:05.193 "params": { 00:44:05.193 "period_us": 100000, 00:44:05.193 "enable": false 00:44:05.193 } 00:44:05.193 }, 00:44:05.193 { 00:44:05.193 "method": "bdev_wait_for_examine" 00:44:05.193 } 00:44:05.193 ] 
00:44:05.193 }, 00:44:05.193 { 00:44:05.193 "subsystem": "nbd", 00:44:05.193 "config": [] 00:44:05.193 } 00:44:05.193 ] 00:44:05.193 }' 00:44:05.193 22:53:25 keyring_file -- keyring/file.sh@115 -- # killprocess 659446 00:44:05.193 22:53:25 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 659446 ']' 00:44:05.193 22:53:25 keyring_file -- common/autotest_common.sh@958 -- # kill -0 659446 00:44:05.193 22:53:25 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:05.193 22:53:25 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:05.193 22:53:25 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 659446 00:44:05.193 22:53:25 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:05.193 22:53:25 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:05.193 22:53:25 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 659446' 00:44:05.193 killing process with pid 659446 00:44:05.193 22:53:25 keyring_file -- common/autotest_common.sh@973 -- # kill 659446 00:44:05.193 Received shutdown signal, test time was about 1.000000 seconds 00:44:05.193 00:44:05.193 Latency(us) 00:44:05.193 [2024-12-14T21:53:26.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:05.193 [2024-12-14T21:53:26.077Z] =================================================================================================================== 00:44:05.193 [2024-12-14T21:53:26.077Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:05.193 22:53:25 keyring_file -- common/autotest_common.sh@978 -- # wait 659446 00:44:05.193 22:53:26 keyring_file -- keyring/file.sh@118 -- # bperfpid=660924 00:44:05.193 22:53:26 keyring_file -- keyring/file.sh@120 -- # waitforlisten 660924 /var/tmp/bperf.sock 00:44:05.193 22:53:26 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 660924 ']' 00:44:05.193 22:53:26 keyring_file -- keyring/file.sh@116 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:44:05.193 22:53:26 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:05.193 22:53:26 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:05.193 22:53:26 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:05.193 22:53:26 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:44:05.193 "subsystems": [ 00:44:05.193 { 00:44:05.193 "subsystem": "keyring", 00:44:05.193 "config": [ 00:44:05.193 { 00:44:05.193 "method": "keyring_file_add_key", 00:44:05.193 "params": { 00:44:05.193 "name": "key0", 00:44:05.193 "path": "/tmp/tmp.FUAhbSnqu5" 00:44:05.193 } 00:44:05.193 }, 00:44:05.193 { 00:44:05.193 "method": "keyring_file_add_key", 00:44:05.193 "params": { 00:44:05.193 "name": "key1", 00:44:05.193 "path": "/tmp/tmp.AU6x5cjpDg" 00:44:05.193 } 00:44:05.193 } 00:44:05.193 ] 00:44:05.193 }, 00:44:05.193 { 00:44:05.193 "subsystem": "iobuf", 00:44:05.193 "config": [ 00:44:05.193 { 00:44:05.193 "method": "iobuf_set_options", 00:44:05.193 "params": { 00:44:05.193 "small_pool_count": 8192, 00:44:05.193 "large_pool_count": 1024, 00:44:05.193 "small_bufsize": 8192, 00:44:05.193 "large_bufsize": 135168, 00:44:05.193 "enable_numa": false 00:44:05.193 } 00:44:05.193 } 00:44:05.193 ] 00:44:05.193 }, 00:44:05.193 { 00:44:05.193 "subsystem": "sock", 00:44:05.193 "config": [ 00:44:05.193 { 00:44:05.193 "method": "sock_set_default_impl", 00:44:05.193 "params": { 00:44:05.193 "impl_name": "posix" 00:44:05.193 } 00:44:05.193 }, 00:44:05.193 { 00:44:05.193 "method": "sock_impl_set_options", 00:44:05.193 "params": { 00:44:05.193 "impl_name": "ssl", 00:44:05.193 "recv_buf_size": 4096, 00:44:05.193 "send_buf_size": 4096, 00:44:05.193 "enable_recv_pipe": true, 00:44:05.193 
"enable_quickack": false, 00:44:05.193 "enable_placement_id": 0, 00:44:05.193 "enable_zerocopy_send_server": true, 00:44:05.193 "enable_zerocopy_send_client": false, 00:44:05.193 "zerocopy_threshold": 0, 00:44:05.193 "tls_version": 0, 00:44:05.193 "enable_ktls": false 00:44:05.193 } 00:44:05.193 }, 00:44:05.193 { 00:44:05.193 "method": "sock_impl_set_options", 00:44:05.193 "params": { 00:44:05.193 "impl_name": "posix", 00:44:05.193 "recv_buf_size": 2097152, 00:44:05.193 "send_buf_size": 2097152, 00:44:05.193 "enable_recv_pipe": true, 00:44:05.193 "enable_quickack": false, 00:44:05.193 "enable_placement_id": 0, 00:44:05.193 "enable_zerocopy_send_server": true, 00:44:05.193 "enable_zerocopy_send_client": false, 00:44:05.193 "zerocopy_threshold": 0, 00:44:05.193 "tls_version": 0, 00:44:05.193 "enable_ktls": false 00:44:05.193 } 00:44:05.193 } 00:44:05.193 ] 00:44:05.193 }, 00:44:05.193 { 00:44:05.193 "subsystem": "vmd", 00:44:05.193 "config": [] 00:44:05.193 }, 00:44:05.193 { 00:44:05.193 "subsystem": "accel", 00:44:05.193 "config": [ 00:44:05.193 { 00:44:05.193 "method": "accel_set_options", 00:44:05.193 "params": { 00:44:05.193 "small_cache_size": 128, 00:44:05.193 "large_cache_size": 16, 00:44:05.193 "task_count": 2048, 00:44:05.193 "sequence_count": 2048, 00:44:05.193 "buf_count": 2048 00:44:05.193 } 00:44:05.193 } 00:44:05.193 ] 00:44:05.193 }, 00:44:05.193 { 00:44:05.193 "subsystem": "bdev", 00:44:05.193 "config": [ 00:44:05.193 { 00:44:05.193 "method": "bdev_set_options", 00:44:05.193 "params": { 00:44:05.193 "bdev_io_pool_size": 65535, 00:44:05.193 "bdev_io_cache_size": 256, 00:44:05.193 "bdev_auto_examine": true, 00:44:05.194 "iobuf_small_cache_size": 128, 00:44:05.194 "iobuf_large_cache_size": 16 00:44:05.194 } 00:44:05.194 }, 00:44:05.194 { 00:44:05.194 "method": "bdev_raid_set_options", 00:44:05.194 "params": { 00:44:05.194 "process_window_size_kb": 1024, 00:44:05.194 "process_max_bandwidth_mb_sec": 0 00:44:05.194 } 00:44:05.194 }, 00:44:05.194 { 
00:44:05.194 "method": "bdev_iscsi_set_options", 00:44:05.194 "params": { 00:44:05.194 "timeout_sec": 30 00:44:05.194 } 00:44:05.194 }, 00:44:05.194 { 00:44:05.194 "method": "bdev_nvme_set_options", 00:44:05.194 "params": { 00:44:05.194 "action_on_timeout": "none", 00:44:05.194 "timeout_us": 0, 00:44:05.194 "timeout_admin_us": 0, 00:44:05.194 "keep_alive_timeout_ms": 10000, 00:44:05.194 "arbitration_burst": 0, 00:44:05.194 "low_priority_weight": 0, 00:44:05.194 "medium_priority_weight": 0, 00:44:05.194 "high_priority_weight": 0, 00:44:05.194 "nvme_adminq_poll_period_us": 10000, 00:44:05.194 "nvme_ioq_poll_period_us": 0, 00:44:05.194 "io_queue_requests": 512, 00:44:05.194 "delay_cmd_submit": true, 00:44:05.194 "transport_retry_count": 4, 00:44:05.194 "bdev_retry_count": 3, 00:44:05.194 "transport_ack_timeout": 0, 00:44:05.194 "ctrlr_loss_timeout_sec": 0, 00:44:05.194 "reconnect_delay_sec": 0, 00:44:05.194 "fast_io_fail_timeout_sec": 0, 00:44:05.194 "disable_auto_failback": false, 00:44:05.194 "generate_uuids": false, 00:44:05.194 "transport_tos": 0, 00:44:05.194 "nvme_error_stat": false, 00:44:05.194 "rdma_srq_size": 0, 00:44:05.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:44:05.194 "io_path_stat": false, 00:44:05.194 "allow_accel_sequence": false, 00:44:05.194 "rdma_max_cq_size": 0, 00:44:05.194 "rdma_cm_event_timeout_ms": 0, 00:44:05.194 "dhchap_digests": [ 00:44:05.194 "sha256", 00:44:05.194 "sha384", 00:44:05.194 "sha512" 00:44:05.194 ], 00:44:05.194 "dhchap_dhgroups": [ 00:44:05.194 "null", 00:44:05.194 "ffdhe2048", 00:44:05.194 "ffdhe3072", 00:44:05.194 "ffdhe4096", 00:44:05.194 "ffdhe6144", 00:44:05.194 "ffdhe8192" 00:44:05.194 ], 00:44:05.194 "rdma_umr_per_io": false 00:44:05.194 } 00:44:05.194 }, 00:44:05.194 { 00:44:05.194 "method": "bdev_nvme_attach_controller", 00:44:05.194 "params": { 00:44:05.194 "name": "nvme0", 00:44:05.194 "trtype": "TCP", 00:44:05.194 "adrfam": "IPv4", 00:44:05.194 "traddr": "127.0.0.1", 00:44:05.194 "trsvcid": "4420", 00:44:05.194 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:05.194 "prchk_reftag": false, 00:44:05.194 "prchk_guard": false, 00:44:05.194 "ctrlr_loss_timeout_sec": 0, 00:44:05.194 "reconnect_delay_sec": 0, 00:44:05.194 "fast_io_fail_timeout_sec": 0, 00:44:05.194 "psk": "key0", 00:44:05.194 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:05.194 "hdgst": false, 00:44:05.194 "ddgst": false, 00:44:05.194 "multipath": "multipath" 00:44:05.194 } 00:44:05.194 }, 00:44:05.194 { 00:44:05.194 "method": "bdev_nvme_set_hotplug", 00:44:05.194 "params": { 00:44:05.194 "period_us": 100000, 00:44:05.194 "enable": false 00:44:05.194 } 00:44:05.194 }, 00:44:05.194 { 00:44:05.194 "method": "bdev_wait_for_examine" 00:44:05.194 } 00:44:05.194 ] 00:44:05.194 }, 00:44:05.194 { 00:44:05.194 "subsystem": "nbd", 00:44:05.194 "config": [] 00:44:05.194 } 00:44:05.194 ] 00:44:05.194 }' 00:44:05.194 22:53:26 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:05.194 22:53:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:05.194 [2024-12-14 22:53:26.071733] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:44:05.194 [2024-12-14 22:53:26.071784] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660924 ] 00:44:05.453 [2024-12-14 22:53:26.147668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:05.453 [2024-12-14 22:53:26.167144] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:05.453 [2024-12-14 22:53:26.323554] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:06.389 22:53:26 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:06.389 22:53:26 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:06.389 22:53:26 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:44:06.389 22:53:26 keyring_file -- keyring/file.sh@121 -- # jq length 00:44:06.389 22:53:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:06.389 22:53:27 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:44:06.389 22:53:27 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:44:06.389 22:53:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:06.389 22:53:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:06.389 22:53:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:06.389 22:53:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:06.389 22:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:06.648 22:53:27 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:44:06.648 22:53:27 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:44:06.648 22:53:27 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:06.648 22:53:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:06.648 22:53:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:06.648 22:53:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:06.648 22:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:06.648 22:53:27 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:44:06.648 22:53:27 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:44:06.648 22:53:27 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:44:06.648 22:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:44:06.907 22:53:27 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:44:06.907 22:53:27 keyring_file -- keyring/file.sh@1 -- # cleanup 00:44:06.907 22:53:27 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.FUAhbSnqu5 /tmp/tmp.AU6x5cjpDg 00:44:06.907 22:53:27 keyring_file -- keyring/file.sh@20 -- # killprocess 660924 00:44:06.907 22:53:27 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 660924 ']' 00:44:06.907 22:53:27 keyring_file -- common/autotest_common.sh@958 -- # kill -0 660924 00:44:06.907 22:53:27 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:06.907 22:53:27 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:06.907 22:53:27 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 660924 00:44:06.907 22:53:27 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:06.907 22:53:27 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:06.907 22:53:27 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 660924' 00:44:06.907 killing process with pid 660924 00:44:06.907 22:53:27 keyring_file -- common/autotest_common.sh@973 -- # kill 660924 00:44:06.907 Received shutdown signal, test time was about 1.000000 seconds 00:44:06.907 00:44:06.907 Latency(us) 00:44:06.907 [2024-12-14T21:53:27.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:06.907 [2024-12-14T21:53:27.791Z] =================================================================================================================== 00:44:06.907 [2024-12-14T21:53:27.791Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:06.907 22:53:27 keyring_file -- common/autotest_common.sh@978 -- # wait 660924 00:44:07.166 22:53:27 keyring_file -- keyring/file.sh@21 -- # killprocess 659441 00:44:07.166 22:53:27 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 659441 ']' 00:44:07.166 22:53:27 keyring_file -- common/autotest_common.sh@958 -- # kill -0 659441 00:44:07.166 22:53:27 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:07.166 22:53:27 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:07.166 22:53:27 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 659441 00:44:07.166 22:53:27 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:07.166 22:53:27 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:07.166 22:53:27 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 659441' 00:44:07.166 killing process with pid 659441 00:44:07.166 22:53:27 keyring_file -- common/autotest_common.sh@973 -- # kill 659441 00:44:07.166 22:53:27 keyring_file -- common/autotest_common.sh@978 -- # wait 659441 00:44:07.426 00:44:07.426 real 0m11.668s 00:44:07.426 user 0m29.004s 00:44:07.426 sys 0m2.753s 00:44:07.426 22:53:28 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:07.426 22:53:28 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:07.426 ************************************ 00:44:07.426 END TEST keyring_file 00:44:07.426 ************************************ 00:44:07.426 22:53:28 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:44:07.426 22:53:28 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:07.426 22:53:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:07.426 22:53:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:07.426 22:53:28 -- common/autotest_common.sh@10 -- # set +x 00:44:07.686 ************************************ 00:44:07.686 START TEST keyring_linux 00:44:07.686 ************************************ 00:44:07.686 22:53:28 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:07.686 Joined session keyring: 297154660 00:44:07.686 * Looking for test storage... 
00:44:07.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:07.686 22:53:28 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:07.686 22:53:28 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:44:07.686 22:53:28 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:07.686 22:53:28 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@345 -- # : 1 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@368 -- # return 0 00:44:07.686 22:53:28 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:07.686 22:53:28 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:07.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:07.686 --rc genhtml_branch_coverage=1 00:44:07.686 --rc genhtml_function_coverage=1 00:44:07.686 --rc genhtml_legend=1 00:44:07.686 --rc geninfo_all_blocks=1 00:44:07.686 --rc geninfo_unexecuted_blocks=1 00:44:07.686 00:44:07.686 ' 00:44:07.686 22:53:28 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:07.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:07.686 --rc genhtml_branch_coverage=1 00:44:07.686 --rc genhtml_function_coverage=1 00:44:07.686 --rc genhtml_legend=1 00:44:07.686 --rc geninfo_all_blocks=1 00:44:07.686 --rc geninfo_unexecuted_blocks=1 00:44:07.686 00:44:07.686 ' 
00:44:07.686 22:53:28 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:07.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:07.686 --rc genhtml_branch_coverage=1 00:44:07.686 --rc genhtml_function_coverage=1 00:44:07.686 --rc genhtml_legend=1 00:44:07.686 --rc geninfo_all_blocks=1 00:44:07.686 --rc geninfo_unexecuted_blocks=1 00:44:07.686 00:44:07.686 ' 00:44:07.686 22:53:28 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:07.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:07.686 --rc genhtml_branch_coverage=1 00:44:07.686 --rc genhtml_function_coverage=1 00:44:07.686 --rc genhtml_legend=1 00:44:07.686 --rc geninfo_all_blocks=1 00:44:07.686 --rc geninfo_unexecuted_blocks=1 00:44:07.686 00:44:07.686 ' 00:44:07.686 22:53:28 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:07.686 22:53:28 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:07.686 22:53:28 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:07.686 22:53:28 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:07.686 22:53:28 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:07.686 22:53:28 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:07.686 22:53:28 keyring_linux -- paths/export.sh@5 -- # export PATH 00:44:07.686 22:53:28 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:44:07.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:07.686 22:53:28 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:07.686 22:53:28 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:07.686 22:53:28 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:07.686 22:53:28 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:07.686 22:53:28 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:44:07.686 22:53:28 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:44:07.686 22:53:28 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:44:07.686 22:53:28 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:44:07.686 22:53:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:07.686 22:53:28 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:44:07.686 22:53:28 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:07.687 22:53:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:07.687 22:53:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:44:07.687 22:53:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:07.687 22:53:28 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:07.687 22:53:28 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:07.687 22:53:28 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:07.687 22:53:28 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:44:07.687 22:53:28 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:07.687 22:53:28 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:07.946 22:53:28 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:44:07.946 22:53:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:44:07.946 /tmp/:spdk-test:key0 00:44:07.946 22:53:28 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:44:07.946 22:53:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:07.946 22:53:28 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:44:07.946 22:53:28 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:07.946 22:53:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:07.946 22:53:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:44:07.946 22:53:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:07.946 22:53:28 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:07.946 22:53:28 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:07.946 22:53:28 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:07.946 22:53:28 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:07.946 22:53:28 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:07.946 22:53:28 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:07.946 22:53:28 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:44:07.946 22:53:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:44:07.946 /tmp/:spdk-test:key1 00:44:07.946 22:53:28 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=661469 00:44:07.946 22:53:28 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 661469 00:44:07.946 22:53:28 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:07.946 22:53:28 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 661469 ']' 00:44:07.946 22:53:28 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:07.946 22:53:28 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:07.946 22:53:28 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:07.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:07.946 22:53:28 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:07.946 22:53:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:07.946 [2024-12-14 22:53:28.659241] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:44:07.946 [2024-12-14 22:53:28.659290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661469 ] 00:44:07.946 [2024-12-14 22:53:28.732894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:07.946 [2024-12-14 22:53:28.755812] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:08.205 22:53:28 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:08.205 22:53:28 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:08.205 22:53:28 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:44:08.205 22:53:28 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:08.205 22:53:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:08.205 [2024-12-14 22:53:28.965921] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:08.205 null0 00:44:08.205 [2024-12-14 22:53:28.997973] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:08.205 [2024-12-14 22:53:28.998257] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:08.206 22:53:29 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:08.206 22:53:29 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:44:08.206 278193148 00:44:08.206 22:53:29 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:44:08.206 898578213 00:44:08.206 22:53:29 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=661475 00:44:08.206 22:53:29 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 661475 /var/tmp/bperf.sock 00:44:08.206 22:53:29 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:44:08.206 22:53:29 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 661475 ']' 00:44:08.206 22:53:29 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:08.206 22:53:29 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:08.206 22:53:29 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:08.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:08.206 22:53:29 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:08.206 22:53:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:08.206 [2024-12-14 22:53:29.067074] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:44:08.206 [2024-12-14 22:53:29.067113] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661475 ] 00:44:08.464 [2024-12-14 22:53:29.139965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:08.464 [2024-12-14 22:53:29.161657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:08.464 22:53:29 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:08.464 22:53:29 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:08.464 22:53:29 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:44:08.464 22:53:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:44:08.723 22:53:29 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:44:08.723 22:53:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:44:08.981 22:53:29 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:08.981 22:53:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:08.981 [2024-12-14 22:53:29.837141] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:09.240 nvme0n1 00:44:09.240 22:53:29 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:44:09.240 22:53:29 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:44:09.240 22:53:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:09.240 22:53:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:09.240 22:53:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:09.240 22:53:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:09.240 22:53:30 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:44:09.240 22:53:30 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:09.240 22:53:30 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:44:09.240 22:53:30 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:44:09.240 22:53:30 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:09.240 22:53:30 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:44:09.240 22:53:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:09.499 22:53:30 keyring_linux -- keyring/linux.sh@25 -- # sn=278193148 00:44:09.499 22:53:30 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:44:09.499 22:53:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:09.499 22:53:30 keyring_linux -- keyring/linux.sh@26 -- # [[ 278193148 == \2\7\8\1\9\3\1\4\8 ]] 00:44:09.499 22:53:30 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 278193148 00:44:09.499 22:53:30 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:44:09.499 22:53:30 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:09.758 Running I/O for 1 seconds... 00:44:10.694 21342.00 IOPS, 83.37 MiB/s 00:44:10.694 Latency(us) 00:44:10.694 [2024-12-14T21:53:31.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:10.694 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:44:10.694 nvme0n1 : 1.01 21344.15 83.38 0.00 0.00 5977.51 5118.05 12545.46 00:44:10.694 [2024-12-14T21:53:31.578Z] =================================================================================================================== 00:44:10.694 [2024-12-14T21:53:31.578Z] Total : 21344.15 83.38 0.00 0.00 5977.51 5118.05 12545.46 00:44:10.694 { 00:44:10.694 "results": [ 00:44:10.694 { 00:44:10.694 "job": "nvme0n1", 00:44:10.694 "core_mask": "0x2", 00:44:10.694 "workload": "randread", 00:44:10.694 "status": "finished", 00:44:10.694 "queue_depth": 128, 00:44:10.694 "io_size": 4096, 00:44:10.694 "runtime": 1.005896, 00:44:10.694 "iops": 21344.154862928175, 00:44:10.694 "mibps": 83.37560493331318, 00:44:10.694 "io_failed": 0, 00:44:10.694 "io_timeout": 0, 00:44:10.694 "avg_latency_us": 5977.506916317342, 00:44:10.694 "min_latency_us": 5118.049523809524, 00:44:10.694 "max_latency_us": 12545.462857142857 00:44:10.694 } 00:44:10.694 ], 00:44:10.694 "core_count": 1 00:44:10.694 } 00:44:10.694 22:53:31 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:10.694 22:53:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:10.953 22:53:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:44:10.953 22:53:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:44:10.953 22:53:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:10.953 22:53:31 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:10.953 22:53:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:10.953 22:53:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:11.212 22:53:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:44:11.212 22:53:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:11.212 22:53:31 keyring_linux -- keyring/linux.sh@23 -- # return 00:44:11.212 22:53:31 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:11.212 22:53:31 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:44:11.212 22:53:31 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:11.212 22:53:31 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:11.212 22:53:31 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:11.212 22:53:31 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:11.212 22:53:31 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:11.212 22:53:31 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:11.212 22:53:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:11.212 [2024-12-14 22:53:32.047731] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:11.212 [2024-12-14 22:53:32.048473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e700 (107): Transport endpoint is not connected 00:44:11.212 [2024-12-14 22:53:32.049468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e700 (9): Bad file descriptor 00:44:11.212 [2024-12-14 22:53:32.050468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:11.212 [2024-12-14 22:53:32.050478] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:11.212 [2024-12-14 22:53:32.050485] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:11.212 [2024-12-14 22:53:32.050493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
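The `NOT bperf_cmd bdev_nvme_attach_controller …` wrapper above is a negative-path check: the attach is expected to fail once the keys have been unlinked, and the wrapper inverts the exit status (`es=1 … (( !es == 0 ))`). A minimal Python analogue of that invert-the-exit-status pattern (the function name and `subprocess` form are illustrative, not SPDK's shell implementation):

```python
import subprocess

def NOT(*cmd: str) -> bool:
    # Run the command and succeed only if it failed, mirroring the
    # autotest_common.sh NOT wrapper's exit-status inversion in the log.
    es = subprocess.run(cmd, check=False).returncode
    return es != 0
```

For example, `NOT("false")` evaluates to `True` while `NOT("true")` evaluates to `False`, which is exactly the contract the test relies on when the controller attach is supposed to error out.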
00:44:11.212 request: 00:44:11.212 { 00:44:11.212 "name": "nvme0", 00:44:11.212 "trtype": "tcp", 00:44:11.212 "traddr": "127.0.0.1", 00:44:11.212 "adrfam": "ipv4", 00:44:11.212 "trsvcid": "4420", 00:44:11.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:11.212 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:11.212 "prchk_reftag": false, 00:44:11.212 "prchk_guard": false, 00:44:11.212 "hdgst": false, 00:44:11.212 "ddgst": false, 00:44:11.212 "psk": ":spdk-test:key1", 00:44:11.212 "allow_unrecognized_csi": false, 00:44:11.212 "method": "bdev_nvme_attach_controller", 00:44:11.212 "req_id": 1 00:44:11.212 } 00:44:11.212 Got JSON-RPC error response 00:44:11.212 response: 00:44:11.212 { 00:44:11.212 "code": -5, 00:44:11.212 "message": "Input/output error" 00:44:11.212 } 00:44:11.212 22:53:32 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:44:11.212 22:53:32 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:11.212 22:53:32 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:11.212 22:53:32 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:11.212 22:53:32 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:44:11.212 22:53:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:11.212 22:53:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:44:11.212 22:53:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:44:11.212 22:53:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:44:11.212 22:53:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:11.212 22:53:32 keyring_linux -- keyring/linux.sh@33 -- # sn=278193148 00:44:11.212 22:53:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 278193148 00:44:11.212 1 links removed 00:44:11.212 22:53:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:11.212 22:53:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:44:11.212 
22:53:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:44:11.212 22:53:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:44:11.212 22:53:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:44:11.212 22:53:32 keyring_linux -- keyring/linux.sh@33 -- # sn=898578213 00:44:11.212 22:53:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 898578213 00:44:11.212 1 links removed 00:44:11.212 22:53:32 keyring_linux -- keyring/linux.sh@41 -- # killprocess 661475 00:44:11.212 22:53:32 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 661475 ']' 00:44:11.212 22:53:32 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 661475 00:44:11.212 22:53:32 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:11.212 22:53:32 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:11.212 22:53:32 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 661475 00:44:11.471 22:53:32 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:11.471 22:53:32 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:11.471 22:53:32 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 661475' 00:44:11.471 killing process with pid 661475 00:44:11.471 22:53:32 keyring_linux -- common/autotest_common.sh@973 -- # kill 661475 00:44:11.472 Received shutdown signal, test time was about 1.000000 seconds 00:44:11.472 00:44:11.472 Latency(us) 00:44:11.472 [2024-12-14T21:53:32.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:11.472 [2024-12-14T21:53:32.356Z] =================================================================================================================== 00:44:11.472 [2024-12-14T21:53:32.356Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:11.472 22:53:32 keyring_linux -- common/autotest_common.sh@978 -- # wait 661475 
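The `killprocess` sequence above probes the pid, sends a kill, then `wait`s for it. A simplified Python analogue of that check-then-kill-then-reap flow (the `ps`/`uname` guards from the shell helper are folded into a signal-0 liveness probe here; this is a sketch, not the helper's actual code):

```python
import os
import signal

def killprocess(pid: int) -> None:
    # 'kill -0' style liveness probe (raises ProcessLookupError if the pid
    # is gone), then SIGKILL, mirroring the helper's check-then-kill order.
    os.kill(pid, 0)
    os.kill(pid, signal.SIGKILL)
```

The parent must still reap the child (the log's `wait 661475` / `wait 661469`), otherwise the killed process lingers as a zombie.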
00:44:11.472 22:53:32 keyring_linux -- keyring/linux.sh@42 -- # killprocess 661469 00:44:11.472 22:53:32 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 661469 ']' 00:44:11.472 22:53:32 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 661469 00:44:11.472 22:53:32 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:11.472 22:53:32 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:11.472 22:53:32 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 661469 00:44:11.472 22:53:32 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:11.472 22:53:32 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:11.472 22:53:32 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 661469' 00:44:11.472 killing process with pid 661469 00:44:11.472 22:53:32 keyring_linux -- common/autotest_common.sh@973 -- # kill 661469 00:44:11.472 22:53:32 keyring_linux -- common/autotest_common.sh@978 -- # wait 661469 00:44:12.039 00:44:12.039 real 0m4.309s 00:44:12.039 user 0m8.193s 00:44:12.039 sys 0m1.436s 00:44:12.039 22:53:32 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:12.039 22:53:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:12.039 ************************************ 00:44:12.039 END TEST keyring_linux 00:44:12.039 ************************************ 00:44:12.039 22:53:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:44:12.039 22:53:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:44:12.039 22:53:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:44:12.039 22:53:32 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:44:12.039 22:53:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:44:12.039 22:53:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:44:12.039 22:53:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:44:12.039 22:53:32 -- spdk/autotest.sh@346 -- # '[' 0 
-eq 1 ']' 00:44:12.039 22:53:32 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:44:12.039 22:53:32 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:44:12.039 22:53:32 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:44:12.039 22:53:32 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:44:12.039 22:53:32 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:44:12.039 22:53:32 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:44:12.039 22:53:32 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:44:12.039 22:53:32 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:44:12.039 22:53:32 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:44:12.039 22:53:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:12.039 22:53:32 -- common/autotest_common.sh@10 -- # set +x 00:44:12.040 22:53:32 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:44:12.040 22:53:32 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:44:12.040 22:53:32 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:44:12.040 22:53:32 -- common/autotest_common.sh@10 -- # set +x 00:44:17.310 INFO: APP EXITING 00:44:17.310 INFO: killing all VMs 00:44:17.310 INFO: killing vhost app 00:44:17.310 INFO: EXIT DONE 00:44:20.598 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:44:20.598 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:44:20.598 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:44:20.598 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:44:20.598 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:44:20.598 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:44:20.598 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:44:20.598 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:44:20.598 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:44:20.598 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:44:20.598 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:44:20.598 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:44:20.598 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:44:20.598 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:44:20.598 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:44:20.598 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:44:20.598 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:44:23.133 Cleaning 00:44:23.133 Removing: /var/run/dpdk/spdk0/config 00:44:23.133 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:44:23.133 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:44:23.133 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:44:23.133 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:44:23.133 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:44:23.133 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:44:23.133 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:44:23.133 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:44:23.133 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:44:23.133 Removing: /var/run/dpdk/spdk0/hugepage_info 00:44:23.133 Removing: /var/run/dpdk/spdk1/config 00:44:23.133 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:44:23.133 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:44:23.133 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:44:23.133 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:44:23.392 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:44:23.392 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:44:23.392 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:44:23.392 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:44:23.392 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:44:23.392 Removing: /var/run/dpdk/spdk1/hugepage_info 00:44:23.392 Removing: /var/run/dpdk/spdk2/config 00:44:23.392 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:44:23.392 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:44:23.392 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:44:23.392 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:44:23.392 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:44:23.392 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:44:23.392 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:44:23.392 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:44:23.392 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:44:23.392 Removing: /var/run/dpdk/spdk2/hugepage_info 00:44:23.392 Removing: /var/run/dpdk/spdk3/config 00:44:23.392 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:44:23.392 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:44:23.392 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:44:23.392 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:44:23.392 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:44:23.392 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:44:23.392 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:44:23.392 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:44:23.392 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:44:23.392 Removing: /var/run/dpdk/spdk3/hugepage_info 00:44:23.392 Removing: /var/run/dpdk/spdk4/config 00:44:23.392 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:44:23.392 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:44:23.392 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:44:23.392 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:44:23.392 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:44:23.392 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:44:23.392 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:44:23.392 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:44:23.392 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:44:23.392 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:44:23.392 Removing: /dev/shm/bdev_svc_trace.1 00:44:23.392 Removing: /dev/shm/nvmf_trace.0 00:44:23.392 Removing: /dev/shm/spdk_tgt_trace.pid104108 00:44:23.392 Removing: /var/run/dpdk/spdk0 00:44:23.392 Removing: /var/run/dpdk/spdk1 00:44:23.392 Removing: /var/run/dpdk/spdk2 00:44:23.392 Removing: /var/run/dpdk/spdk3 00:44:23.392 Removing: /var/run/dpdk/spdk4 00:44:23.392 Removing: /var/run/dpdk/spdk_pid102016 00:44:23.392 Removing: /var/run/dpdk/spdk_pid103050 00:44:23.392 Removing: /var/run/dpdk/spdk_pid104108 00:44:23.392 Removing: /var/run/dpdk/spdk_pid104731 00:44:23.392 Removing: /var/run/dpdk/spdk_pid105653 00:44:23.392 Removing: /var/run/dpdk/spdk_pid105732 00:44:23.392 Removing: /var/run/dpdk/spdk_pid106779 00:44:23.392 Removing: /var/run/dpdk/spdk_pid106840 00:44:23.392 Removing: /var/run/dpdk/spdk_pid107186 00:44:23.392 Removing: /var/run/dpdk/spdk_pid108661 00:44:23.392 Removing: /var/run/dpdk/spdk_pid109920 00:44:23.392 Removing: /var/run/dpdk/spdk_pid110374 00:44:23.392 Removing: /var/run/dpdk/spdk_pid110534 00:44:23.392 Removing: /var/run/dpdk/spdk_pid110796 00:44:23.392 Removing: /var/run/dpdk/spdk_pid111080 00:44:23.392 Removing: /var/run/dpdk/spdk_pid111326 00:44:23.652 Removing: /var/run/dpdk/spdk_pid111572 00:44:23.652 Removing: /var/run/dpdk/spdk_pid111854 00:44:23.652 Removing: /var/run/dpdk/spdk_pid112577 00:44:23.652 Removing: /var/run/dpdk/spdk_pid115683 00:44:23.652 Removing: /var/run/dpdk/spdk_pid115759 00:44:23.652 Removing: /var/run/dpdk/spdk_pid116013 00:44:23.652 Removing: /var/run/dpdk/spdk_pid116040 00:44:23.652 Removing: /var/run/dpdk/spdk_pid116495 00:44:23.652 Removing: /var/run/dpdk/spdk_pid116648 00:44:23.652 Removing: /var/run/dpdk/spdk_pid116978 00:44:23.652 Removing: /var/run/dpdk/spdk_pid116986 00:44:23.652 Removing: /var/run/dpdk/spdk_pid117358 00:44:23.652 Removing: /var/run/dpdk/spdk_pid117457 00:44:23.652 Removing: /var/run/dpdk/spdk_pid117670 00:44:23.652 Removing: /var/run/dpdk/spdk_pid117716 00:44:23.652 
Removing: /var/run/dpdk/spdk_pid118399 00:44:23.652 Removing: /var/run/dpdk/spdk_pid118564 00:44:23.652 Removing: /var/run/dpdk/spdk_pid118881 00:44:23.652 Removing: /var/run/dpdk/spdk_pid122970 00:44:23.652 Removing: /var/run/dpdk/spdk_pid127153 00:44:23.652 Removing: /var/run/dpdk/spdk_pid137227 00:44:23.652 Removing: /var/run/dpdk/spdk_pid137899 00:44:23.652 Removing: /var/run/dpdk/spdk_pid142097 00:44:23.652 Removing: /var/run/dpdk/spdk_pid142331 00:44:23.652 Removing: /var/run/dpdk/spdk_pid146591 00:44:23.652 Removing: /var/run/dpdk/spdk_pid152493 00:44:23.652 Removing: /var/run/dpdk/spdk_pid155029 00:44:23.652 Removing: /var/run/dpdk/spdk_pid165242 00:44:23.652 Removing: /var/run/dpdk/spdk_pid174522 00:44:23.652 Removing: /var/run/dpdk/spdk_pid176303 00:44:23.652 Removing: /var/run/dpdk/spdk_pid177210 00:44:23.652 Removing: /var/run/dpdk/spdk_pid193943 00:44:23.652 Removing: /var/run/dpdk/spdk_pid197850 00:44:23.652 Removing: /var/run/dpdk/spdk_pid279476 00:44:23.652 Removing: /var/run/dpdk/spdk_pid284635 00:44:23.652 Removing: /var/run/dpdk/spdk_pid290281 00:44:23.652 Removing: /var/run/dpdk/spdk_pid297174 00:44:23.652 Removing: /var/run/dpdk/spdk_pid297177 00:44:23.652 Removing: /var/run/dpdk/spdk_pid298074 00:44:23.652 Removing: /var/run/dpdk/spdk_pid298972 00:44:23.652 Removing: /var/run/dpdk/spdk_pid299820 00:44:23.652 Removing: /var/run/dpdk/spdk_pid300337 00:44:23.652 Removing: /var/run/dpdk/spdk_pid300339 00:44:23.652 Removing: /var/run/dpdk/spdk_pid300582 00:44:23.652 Removing: /var/run/dpdk/spdk_pid300798 00:44:23.652 Removing: /var/run/dpdk/spdk_pid300805 00:44:23.652 Removing: /var/run/dpdk/spdk_pid301699 00:44:23.652 Removing: /var/run/dpdk/spdk_pid302584 00:44:23.652 Removing: /var/run/dpdk/spdk_pid303471 00:44:23.652 Removing: /var/run/dpdk/spdk_pid303929 00:44:23.652 Removing: /var/run/dpdk/spdk_pid303941 00:44:23.652 Removing: /var/run/dpdk/spdk_pid304273 00:44:23.652 Removing: /var/run/dpdk/spdk_pid305366 00:44:23.652 Removing: 
/var/run/dpdk/spdk_pid306319
00:44:23.652 Removing: /var/run/dpdk/spdk_pid314431
00:44:23.652 Removing: /var/run/dpdk/spdk_pid343142
00:44:23.652 Removing: /var/run/dpdk/spdk_pid347533
00:44:23.652 Removing: /var/run/dpdk/spdk_pid349278
00:44:23.652 Removing: /var/run/dpdk/spdk_pid350923
00:44:23.652 Removing: /var/run/dpdk/spdk_pid351102
00:44:23.652 Removing: /var/run/dpdk/spdk_pid351319
00:44:23.652 Removing: /var/run/dpdk/spdk_pid351347
00:44:23.652 Removing: /var/run/dpdk/spdk_pid351838
00:44:23.652 Removing: /var/run/dpdk/spdk_pid353618
00:44:23.912 Removing: /var/run/dpdk/spdk_pid354368
00:44:23.912 Removing: /var/run/dpdk/spdk_pid354852
00:44:23.912 Removing: /var/run/dpdk/spdk_pid357004
00:44:23.912 Removing: /var/run/dpdk/spdk_pid357377
00:44:23.912 Removing: /var/run/dpdk/spdk_pid358071
00:44:23.912 Removing: /var/run/dpdk/spdk_pid362049
00:44:23.912 Removing: /var/run/dpdk/spdk_pid367856
00:44:23.912 Removing: /var/run/dpdk/spdk_pid367858
00:44:23.912 Removing: /var/run/dpdk/spdk_pid367860
00:44:23.912 Removing: /var/run/dpdk/spdk_pid371749
00:44:23.912 Removing: /var/run/dpdk/spdk_pid375445
00:44:23.912 Removing: /var/run/dpdk/spdk_pid380332
00:44:23.912 Removing: /var/run/dpdk/spdk_pid415816
00:44:23.912 Removing: /var/run/dpdk/spdk_pid419865
00:44:23.912 Removing: /var/run/dpdk/spdk_pid425751
00:44:23.912 Removing: /var/run/dpdk/spdk_pid427010
00:44:23.912 Removing: /var/run/dpdk/spdk_pid428303
00:44:23.912 Removing: /var/run/dpdk/spdk_pid429807
00:44:23.912 Removing: /var/run/dpdk/spdk_pid434205
00:44:23.912 Removing: /var/run/dpdk/spdk_pid438455
00:44:23.912 Removing: /var/run/dpdk/spdk_pid442393
00:44:23.912 Removing: /var/run/dpdk/spdk_pid449651
00:44:23.912 Removing: /var/run/dpdk/spdk_pid449653
00:44:23.912 Removing: /var/run/dpdk/spdk_pid454786
00:44:23.912 Removing: /var/run/dpdk/spdk_pid455001
00:44:23.912 Removing: /var/run/dpdk/spdk_pid455192
00:44:23.912 Removing: /var/run/dpdk/spdk_pid455471
00:44:23.912 Removing: /var/run/dpdk/spdk_pid455518
00:44:23.912 Removing: /var/run/dpdk/spdk_pid456879
00:44:23.912 Removing: /var/run/dpdk/spdk_pid458618
00:44:23.912 Removing: /var/run/dpdk/spdk_pid460173
00:44:23.912 Removing: /var/run/dpdk/spdk_pid461824
00:44:23.912 Removing: /var/run/dpdk/spdk_pid463490
00:44:23.912 Removing: /var/run/dpdk/spdk_pid465050
00:44:23.912 Removing: /var/run/dpdk/spdk_pid470791
00:44:23.912 Removing: /var/run/dpdk/spdk_pid471346
00:44:23.912 Removing: /var/run/dpdk/spdk_pid473115
00:44:23.912 Removing: /var/run/dpdk/spdk_pid474063
00:44:23.912 Removing: /var/run/dpdk/spdk_pid479824
00:44:23.912 Removing: /var/run/dpdk/spdk_pid482331
00:44:23.912 Removing: /var/run/dpdk/spdk_pid487630
00:44:23.912 Removing: /var/run/dpdk/spdk_pid493427
00:44:23.912 Removing: /var/run/dpdk/spdk_pid501967
00:44:23.912 Removing: /var/run/dpdk/spdk_pid509053
00:44:23.912 Removing: /var/run/dpdk/spdk_pid509055
00:44:23.912 Removing: /var/run/dpdk/spdk_pid527703
00:44:23.912 Removing: /var/run/dpdk/spdk_pid528168
00:44:23.912 Removing: /var/run/dpdk/spdk_pid528631
00:44:23.912 Removing: /var/run/dpdk/spdk_pid529296
00:44:23.912 Removing: /var/run/dpdk/spdk_pid529921
00:44:23.912 Removing: /var/run/dpdk/spdk_pid530476
00:44:23.912 Removing: /var/run/dpdk/spdk_pid530949
00:44:23.912 Removing: /var/run/dpdk/spdk_pid531530
00:44:23.912 Removing: /var/run/dpdk/spdk_pid535719
00:44:23.912 Removing: /var/run/dpdk/spdk_pid535942
00:44:23.912 Removing: /var/run/dpdk/spdk_pid542278
00:44:23.912 Removing: /var/run/dpdk/spdk_pid542328
00:44:23.912 Removing: /var/run/dpdk/spdk_pid547686
00:44:23.912 Removing: /var/run/dpdk/spdk_pid551838
00:44:23.912 Removing: /var/run/dpdk/spdk_pid561362
00:44:23.912 Removing: /var/run/dpdk/spdk_pid561955
00:44:23.912 Removing: /var/run/dpdk/spdk_pid565996
00:44:23.912 Removing: /var/run/dpdk/spdk_pid566234
00:44:23.912 Removing: /var/run/dpdk/spdk_pid570400
00:44:23.912 Removing: /var/run/dpdk/spdk_pid575915
00:44:23.912 Removing: /var/run/dpdk/spdk_pid578425
00:44:24.171 Removing: /var/run/dpdk/spdk_pid588693
00:44:24.171 Removing: /var/run/dpdk/spdk_pid597190
00:44:24.171 Removing: /var/run/dpdk/spdk_pid598832
00:44:24.171 Removing: /var/run/dpdk/spdk_pid599647
00:44:24.171 Removing: /var/run/dpdk/spdk_pid615472
00:44:24.171 Removing: /var/run/dpdk/spdk_pid619215
00:44:24.171 Removing: /var/run/dpdk/spdk_pid621970
00:44:24.171 Removing: /var/run/dpdk/spdk_pid630135
00:44:24.171 Removing: /var/run/dpdk/spdk_pid630140
00:44:24.171 Removing: /var/run/dpdk/spdk_pid635085
00:44:24.171 Removing: /var/run/dpdk/spdk_pid636998
00:44:24.171 Removing: /var/run/dpdk/spdk_pid638782
00:44:24.171 Removing: /var/run/dpdk/spdk_pid639934
00:44:24.171 Removing: /var/run/dpdk/spdk_pid641852
00:44:24.171 Removing: /var/run/dpdk/spdk_pid642939
00:44:24.171 Removing: /var/run/dpdk/spdk_pid651465
00:44:24.171 Removing: /var/run/dpdk/spdk_pid652071
00:44:24.171 Removing: /var/run/dpdk/spdk_pid652558
00:44:24.171 Removing: /var/run/dpdk/spdk_pid654784
00:44:24.171 Removing: /var/run/dpdk/spdk_pid655237
00:44:24.171 Removing: /var/run/dpdk/spdk_pid655693
00:44:24.171 Removing: /var/run/dpdk/spdk_pid659441
00:44:24.171 Removing: /var/run/dpdk/spdk_pid659446
00:44:24.171 Removing: /var/run/dpdk/spdk_pid660924
00:44:24.171 Removing: /var/run/dpdk/spdk_pid661469
00:44:24.171 Removing: /var/run/dpdk/spdk_pid661475
00:44:24.171 Clean
00:44:24.171 22:53:44 -- common/autotest_common.sh@1453 -- # return 0
00:44:24.171 22:53:44 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:44:24.171 22:53:44 -- common/autotest_common.sh@732 -- # xtrace_disable
00:44:24.171 22:53:44 -- common/autotest_common.sh@10 -- # set +x
00:44:24.171 22:53:45 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:44:24.171 22:53:45 -- common/autotest_common.sh@732 -- # xtrace_disable
00:44:24.171 22:53:45 -- common/autotest_common.sh@10 -- # set +x
00:44:24.171 22:53:45 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:44:24.171 22:53:45 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:44:24.171 22:53:45 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:44:24.430 22:53:45 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:44:24.430 22:53:45 -- spdk/autotest.sh@398 -- # hostname
00:44:24.430 22:53:45 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:44:24.430 geninfo: WARNING: invalid characters removed from testname!
00:44:46.367 22:54:06 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:48.271 22:54:08 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:50.175 22:54:10 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:52.080 22:54:12 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:53.457 22:54:14 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:55.366 22:54:16 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:57.270 22:54:18 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:44:57.270 22:54:18 -- spdk/autorun.sh@1 -- $ timing_finish
00:44:57.270 22:54:18 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:44:57.270 22:54:18 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:44:57.270 22:54:18 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:44:57.270 22:54:18 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:44:57.270 + [[ -n 7548 ]]
00:44:57.270 + sudo kill 7548
00:44:57.281 [Pipeline] }
00:44:57.296 [Pipeline] // stage
00:44:57.301 [Pipeline] }
00:44:57.315 [Pipeline] // timeout
00:44:57.321 [Pipeline] }
00:44:57.335 [Pipeline] // catchError
00:44:57.340 [Pipeline] }
00:44:57.355 [Pipeline] // wrap
00:44:57.361 [Pipeline] }
00:44:57.374 [Pipeline] // catchError
00:44:57.383 [Pipeline] stage
00:44:57.385 [Pipeline] { (Epilogue)
00:44:57.397 [Pipeline] catchError
00:44:57.399 [Pipeline] {
00:44:57.412 [Pipeline] echo
00:44:57.414 Cleanup processes
00:44:57.419 [Pipeline] sh
00:44:57.707 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:57.707 673703 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:57.721 [Pipeline] sh
00:44:58.006 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:58.006 ++ grep -v 'sudo pgrep'
00:44:58.006 ++ awk '{print $1}'
00:44:58.006 + sudo kill -9
00:44:58.006 + true
00:44:58.018 [Pipeline] sh
00:44:58.303 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:45:10.649 [Pipeline] sh
00:45:10.934 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:45:10.934 Artifacts sizes are good
00:45:10.947 [Pipeline] archiveArtifacts
00:45:10.953 Archiving artifacts
00:45:11.324 [Pipeline] sh
00:45:11.609 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:45:11.623 [Pipeline] cleanWs
00:45:11.633 [WS-CLEANUP] Deleting project workspace...
00:45:11.633 [WS-CLEANUP] Deferred wipeout is used...
00:45:11.640 [WS-CLEANUP] done
00:45:11.642 [Pipeline] }
00:45:11.659 [Pipeline] // catchError
00:45:11.671 [Pipeline] sh
00:45:11.954 + logger -p user.info -t JENKINS-CI
00:45:11.963 [Pipeline] }
00:45:11.977 [Pipeline] // stage
00:45:11.983 [Pipeline] }
00:45:11.997 [Pipeline] // node
00:45:12.003 [Pipeline] End of Pipeline
00:45:12.076 Finished: SUCCESS
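Editor's note: the coverage post-processing in the log above follows a fixed order — capture test-time counters, merge them with the pre-test baseline, then strip external code from the merged tracefile one pattern at a time. A dry-run sketch of that sequence is below; `run_lcov` only echoes the would-be command so the order is inspectable without lcov installed, and `OUT` is an illustrative stand-in for the job's `spdk/../output` directory (the real calls also carry the `genhtml_*`/`geninfo_*` `--rc` switches seen in the log).

```shell
#!/bin/sh
# Dry-run sketch of the lcov sequence from the log (assumption: echoing
# instead of executing; flags abbreviated to the two branch/function ones).
OUT=output

run_lcov() {
    echo lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q "$@"
}

# 1. Capture test-time counters, tagged with the test host (-t spdk-wfp-04).
run_lcov -c --no-external -d ./spdk -t spdk-wfp-04 -o "$OUT/cov_test.info"

# 2. Merge the pre-test baseline with the test capture (-a appends tracefiles).
run_lcov -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# 3. Remove third-party and helper-app records, one glob per -r pass,
#    rewriting cov_total.info in place each time, as the log does.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    run_lcov -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
done
```

Rewriting the tracefile in place per pass keeps the step list simple at the cost of rerunning lcov once per exclude pattern; a single `-r` call can also take several patterns at once.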
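Editor's note: both "Cleanup processes" steps in the log use the same pipeline, `pgrep -af <path> | grep -v 'sudo pgrep' | awk '{print $1}'`, before `kill -9`. Because `pgrep -af` prints each PID with its full command line, the pgrep invocation matches its own pattern and must be filtered out, and the trailing `|| true` (logged as `+ true`) keeps the stage green when nothing is left to kill. A minimal sketch of the filter (the function name is illustrative; the listing is fed in as text so the sketch is testable without live processes):

```shell
#!/bin/sh
# Filter a `pgrep -af PATTERN` listing down to killable PIDs:
# drop the pgrep invocation itself, then keep only the PID column.
filter_stale_pids() {
    grep -v 'sudo pgrep' | awk '{print $1}'
}

# Simulated `sudo pgrep -af .../spdk` output: the first line is pgrep
# matching itself, the second a leftover SPDK target process.
listing='6609 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
4242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt'

printf '%s\n' "$listing" | filter_stale_pids
# The pipeline then runs: sudo kill -9 $pids || true
```

Note that `grep -v` exits non-zero when it filters away every line, so under `set -e` the real pipeline still relies on the final `|| true`.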